Flannel is crashing for Slave node

5/15/2020

I am getting this result for the flannel pod on my slave node. Flannel is running fine on the master node.

kube-system   kube-flannel-ds-amd64-xbtrf      0/1     CrashLoopBackOff   4          3m5s

kube-proxy is running fine on the slave, but the flannel pod is not.

I have a master and a slave node only. At first the pod says Running, then it goes to Error, and finally CrashLoopBackOff.

godfrey@master:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS             RESTARTS   AGE     IP                NODE     NOMINATED NODE   READINESS GATES
kube-system   kube-flannel-ds-amd64-jszwx      0/1     CrashLoopBackOff   4          2m17s   192.168.152.104   slave3   <none>           <none>
kube-system   kube-proxy-hxs6m                 1/1     Running            0          18m     192.168.152.104   slave3   <none>           <none>

I am also getting this from the logs:

I0515 05:14:53.975822       1 main.go:390] Found network config - Backend type: vxlan
I0515 05:14:53.975856       1 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
E0515 05:14:53.976072       1 main.go:291] Error registering network: failed to acquire lease: node "slave3" pod cidr not assigned
I0515 05:14:53.976154       1 main.go:370] Stopping shutdownHandler...
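
The error suggests that the node was never assigned a podCIDR. One way to check this directly (assuming kubectl is configured on the master) is to print the node's podCIDR field; an empty result means none was assigned:

godfrey@master:~$ kubectl get node slave3 -o jsonpath='{.spec.podCIDR}'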

I have not been able to find a solution so far. Any help is appreciated.

-- Godfrey Tan
kubernetes

1 Answer

5/25/2020

As the solution came from the OP, I'm posting this answer as community wiki.

As reported by the OP in the comments, the podCIDR set with --pod-network-cidr during kubeadm init was not propagated to the slave nodes.

The following command was used to see that the flannel pod was in CrashLoopBackOff state:

sudo kubectl get pods --all-namespaces -o wide

To confirm that a podCIDR had not been assigned, the logs of the flannel pod kube-flannel-ds-amd64-ksmmh (the one in CrashLoopBackOff state) were checked:

$ kubectl logs -n kube-system kube-flannel-ds-amd64-ksmmh

kubeadm init --pod-network-cidr=172.168.10.0/24 did not propagate the podCIDR to the slave nodes as expected.
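
To see which nodes are affected, the podCIDR assigned to every node can be listed in one go (a quick diagnostic, not part of the original report); any node showing <none> in the PODCIDR column has the problem:

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR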

Hence, to solve the problem, the command kubectl patch node slave1 -p '{"spec":{"podCIDR":"172.168.10.0/24"}}' had to be used to assign a podCIDR to each slave node, as sketched below.
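
For clusters with several worker nodes, the same patch can be repeated per node. The loop below is a hypothetical sketch (the node names are placeholders, and in a typical multi-node setup each node would get its own non-overlapping slice of the cluster CIDR rather than the identical value):

# Hypothetical: assign a podCIDR to each node that is missing one.
for node in slave1 slave2; do
  kubectl patch node "$node" -p '{"spec":{"podCIDR":"172.168.10.0/24"}}'
done

After patching, the flannel pod should acquire its lease on the next restart. Deleting the crashing pod (the DaemonSet recreates it) speeds this up, assuming the standard app=flannel label from the kube-flannel manifest:

kubectl -n kube-system delete pod -l app=flannel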

For more details, see the "Kubernetes Specific" section of the flannel troubleshooting documentation: coreos.com/flannel/docs/latest/troubleshooting.html

-- mWatney
Source: StackOverflow