Can anybody explain to me how MetalLB assigns IP addresses in a Kubernetes environment? I have installed a Kubernetes cluster on GCP Compute Engine instances and provided a range of internal IP addresses in the MetalLB ConfigMap.
NAME STATUS INTERNAL-IP EXTERNAL-IP
instance-1 Ready 10.140.0.20 56.169.53.26
instance-2 Ready 10.140.0.21 57.11.92.241
instance-3 Ready 10.140.0.22 54.7.255.253
In my case the IP range I gave in the ConfigMap was 10.140.0.30-10.140.0.40.
It works as expected, but I want to know how MetalLB assigns these IP addresses.
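For reference, a layer 2 ConfigMap with that range would look roughly like this (the pool name default is an assumption, and this ConfigMap schema applies to MetalLB versions configured via ConfigMap, i.e. before 0.13, which moved to CRDs):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default          # pool name is an arbitrary choice
      protocol: layer2
      addresses:
      - 10.140.0.30-10.140.0.40
```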
To summarize my comments:
MetalLB in layer 2 mode deploys a Speaker Pod on each node, and these Pods respond to ARP (IPv4) and NDP (IPv6) requests.
If you now connect to the IP that your Kubernetes Service of type: LoadBalancer got from the range you defined in the MetalLB configuration, your client sends an ARP request (who-has <Service-IP>, tell <Client-IP>) to the network.
Since the Speaker Pods listen for ARP requests, they answer with a reply (<Service-IP> is-at <node-MAC-address-of-the-leader>).
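Such a Service could look like the sketch below (the name and selector are made up for illustration). MetalLB picks a free address from the configured pool and writes it into the Service's status, where it shows up as EXTERNAL-IP in kubectl get svc:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service      # hypothetical name
spec:
  type: LoadBalancer    # tells MetalLB to allocate an IP from the pool
  selector:
    app: my-app         # hypothetical Pod label
  ports:
  - port: 80            # port the allocated Service IP answers on
    targetPort: 8080    # port your Pods listen on
```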
It does not mean that your Pod is running on the node whose MAC address is resolved; only the MetalLB "leader" for that Service IP runs there. Your request is then handed over to kube-proxy, which knows where your Pod actually lives.
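If you want to see which node is currently announcing the IP, MetalLB records this as an event on the Service (my-service is the hypothetical name from above; the exact event wording may vary by version):

```shell
# Show the Service's events; MetalLB's speaker adds a
# "nodeAssigned" event naming the announcing (leader) node
kubectl describe svc my-service
```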
Also keep in mind:
In that sense, layer 2 does not implement a load-balancer. Rather, it implements a failover mechanism so that a different node can take over should the current leader node fail for some reason.
https://metallb.universe.tf/concepts/layer2/#load-balancing-behavior