I'm trying to set up a bare metal Kubernetes cluster. I have the basic cluster set up, no problem, but I can't seem to get MetalLB working correctly to expose an external IP to a service.
My end goal is to be able to deploy an application with 2+ replicas and have a single IP/Port that I can reference in order to hit any of the running instances.
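For context, the end state I'm aiming for looks roughly like the sketch below (just an illustration of the shape of it; the nginx image and all the names are placeholders):
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2                # two or more identical instances
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer         # single external IP/port in front of all replicas
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF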
So far, what I've done (to test this out) is:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
kubectl apply -f metallb-layer-2.yml
kubectl run nginx --image=nginx --port=80
kubectl expose deployment nginx --type=LoadBalancer --name=nginx-service
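All of those apply without errors. As a sanity check afterwards I also run something like the following to confirm the pieces came up (output omitted here):
kubectl -n metallb-system get pods        # controller + speaker(s) should be Running
kubectl get deployment nginx              # the test deployment
kubectl get svc nginx-service -w          # watch for an EXTERNAL-IP to be assigned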
metallb-layer-2.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: k8s-master-ip-space
      protocol: layer2
      addresses:
      - 192.168.0.240-192.168.0.250
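To make sure the ConfigMap actually landed the way I expect, something like this should echo the pool back (just a verification sketch):
kubectl -n metallb-system get configmap config -o yaml   # should show the layer2 address pool above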
and then when I run kubectl get svc, I get:
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-service   LoadBalancer   10.101.122.140   <pending>     80:30930/TCP   9m29s
No matter what I do, I can't get the service to have an external IP. Does anyone have an idea?
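For what it's worth, this is the kind of thing I've been checking while debugging (the component label is my assumption about how the v0.7.3 manifests tag the pods, so it may differ):
kubectl describe svc nginx-service                                 # check Events for MetalLB allocation errors
kubectl -n metallb-system logs -l component=controller --tail=50   # the controller is what assigns the external IP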
EDIT: After finding another post about using NodePort, I did the following:
iptables -A FORWARD -j ACCEPT
found here.
Now, unfortunately, when I try to curl the nginx endpoint, I get:
> kubectl get svc
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
nginx-service   LoadBalancer   10.101.122.140   192.168.0.240   80:30930/TCP   13h
> curl 192.168.0.240:30930
curl: (7) Failed to connect to 192.168.0.240 port 30930: No route to host
> curl 192.168.0.240:80
curl: (7) Failed to connect to 192.168.0.240 port 80: No route to host
I'm not sure what exactly this means now.
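One way I thought of to narrow down the "No route to host" is to check whether the LoadBalancer address resolves on the LAN at all; a rough sketch, assuming the client's interface is eth0 (adjust to the real interface name):
sudo arping -I eth0 -c 3 192.168.0.240   # does anything answer ARP for the LoadBalancer IP?
ip neigh show 192.168.0.240              # is there a failed/stale neighbour entry on the client?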
EDIT 2: When I run tcpdump on the worker where the pod is running, I get:
15:51:44.705699 IP 192.168.0.223.54602 > 192.168.0.240.http: Flags [S], seq 1839813500, win 29200, options [mss 1460,sackOK,TS val 375711117 ecr 0,nop,wscale 7], length 0
15:51:44.709940 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28
15:51:45.760613 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28
15:51:45.775511 IP 192.168.0.223.54602 > 192.168.0.240.http: Flags [S], seq 1839813500, win 29200, options [mss 1460,sackOK,TS val 375712189 ecr 0,nop,wscale 7], length 0
15:51:46.800622 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28
15:51:47.843262 IP 192.168.0.223.54602 > 192.168.0.240.http: Flags [S], seq 1839813500, win 29200, options [mss 1460,sackOK,TS val 375714257 ecr 0,nop,wscale 7], length 0
15:51:47.843482 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28
15:51:48.880572 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28
15:51:49.774953 ARP, Request who-has 192.168.0.240 tell 192.168.0.223, length 46
15:51:49.920602 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28
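So the ARP requests for 192.168.0.240 are never answered, which suggests the MetalLB speaker isn't announcing the address. A rough way to check this from the speaker's side (the component label and the exact log wording are assumptions and may differ by version):
kubectl -n metallb-system logs -l component=speaker --tail=200 | grep -i 192.168.0.240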
After going through this with the MetalLB maintainer, he was able to figure out that the issue was Debian Buster's new nftables-based firewall backend. To switch back to the legacy iptables backend:
# update-alternatives --set iptables /usr/sbin/iptables-legacy
# update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
# update-alternatives --set arptables /usr/sbin/arptables-legacy
# update-alternatives --set ebtables /usr/sbin/ebtables-legacy
and restart the nodes in the cluster!
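After rebooting, a quick check I used to confirm the switch took and the service is reachable (my own verification, nothing official):
iptables --version           # should now report the legacy backend rather than nf_tables
curl http://192.168.0.240/   # the nginx welcome page should come back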