Kubernetes worker nodes not automatically being assigned podCidr on kubeadm join

10/3/2018

I have a multi-master Kubernetes cluster with one worker node, set up with kubeadm. On kubeadm init, I passed --pod-network-cidr=10.244.0.0/16 (using Flannel as the network overlay).
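For reference, a rough sketch of the init invocation with Flannel's default subnet (multi-master flags omitted for brevity; this is illustrative, not my exact command):

    # kubeadm init on a master, handing the cluster CIDR to the controller-manager
    kubeadm init --pod-network-cidr=10.244.0.0/16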

When using kubeadm join on the first worker node, everything worked properly. For some reason, when trying to add more workers, none of the new nodes are automatically assigned a podCIDR.

I used this document to manually patch each worker node with kubectl patch node <NODE_NAME> -p '{"spec":{"podCIDR":"<SUBNET>"}}', and things work fine.
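As a sketch, this is the kind of patching I ended up doing per worker (node names and subnets here are illustrative, not my real values):

    # manually hand each worker a /24 out of the cluster CIDR (illustrative values)
    kubectl patch node kube-worker-02 -p '{"spec":{"podCIDR":"10.244.2.0/24"}}'
    kubectl patch node kube-worker-03 -p '{"spec":{"podCIDR":"10.244.3.0/24"}}'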

But this is not ideal; I am wondering how I can fix my setup so that simply running kubeadm join will automatically assign the podCIDR.
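To see which nodes are missing a podCIDR, something like this should work:

    # list each node with whatever podCIDR (if any) it was allocated
    kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR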

Any help would be greatly appreciated. Thanks!

Edit: flannel pod logs from one of the new worker nodes:

I1003 23:08:55.920623       1 main.go:475] Determining IP address of default interface
I1003 23:08:55.920896       1 main.go:488] Using interface with name eth0 and address 
I1003 23:08:55.920915       1 main.go:505] Defaulting external address to interface address ()
I1003 23:08:55.941287       1 kube.go:131] Waiting 10m0s for node controller to sync
I1003 23:08:55.942785       1 kube.go:294] Starting kube subnet manager
I1003 23:08:56.943187       1 kube.go:138] Node controller sync successful
I1003 23:08:56.943212       1 main.go:235] Created subnet manager: Kubernetes Subnet Manager - kubernetes-worker-06
I1003 23:08:56.943219       1 main.go:238] Installing signal handlers
I1003 23:08:56.943273       1 main.go:353] Found network config - Backend type: vxlan
I1003 23:08:56.943319       1 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
E1003 23:08:56.943497       1 main.go:280] Error registering network: failed to acquire lease: node "kube-worker-02" pod cidr not assigned
I1003 23:08:56.943513       1 main.go:333] Stopping shutdownHandler...
-- user10452751
flannel
kubeadm
kubernetes

2 Answers

11/26/2019

I'm using Kubernetes v1.16 with docker-ce v17.05. The thing is, I only have one master node, which was initialized with the --pod-network-cidr option.

The flannel pod on another worker node failed to sync, according to the kubelet log under /var/log/messages. Checking this pod (with docker logs <container-id>), it turned out that node "<NODE_NAME>" pod cidr not assigned.
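Roughly, the check looked like this (container names and IDs will differ on your nodes):

    # find the flannel container on the affected worker and read its logs
    docker ps | grep flannel
    docker logs <container-id>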

I fixed it by manually setting the podCIDR on the worker node, according to this doc.

Although I've not yet figured out why this manual setup is required, because as the doc points out:

If kubeadm is being used then pass --pod-network-cidr=10.244.0.0/16 to kubeadm init which will ensure that all nodes are automatically assigned a podCIDR.
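If you want to confirm what subnet kubeadm actually recorded at init time, one way (assuming a kubeadm-managed cluster) is to inspect the kubeadm-config ConfigMap and look for networking.podSubnet:

    # the ClusterConfiguration stored by kubeadm should show podSubnet under networking
    kubectl -n kube-system get configmap kubeadm-config -o yaml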

-- AssKicker
Source: StackOverflow

10/5/2018

I was able to solve my issue. In my multi-master setup, on one of my master nodes, the kube-controller-manager.yaml file (in /etc/kubernetes/manifests) was missing the following two flags (an illustrative excerpt follows the list):

  • --allocate-node-cidrs=true
  • --cluster-cidr=10.244.0.0/16
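A sketch of the relevant part of the manifest after the fix (only the command section is shown; the surrounding kubeadm-generated fields are abbreviated here):

    # /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt, illustrative)
    spec:
      containers:
      - command:
        - kube-controller-manager
        - --allocate-node-cidrs=true
        - --cluster-cidr=10.244.0.0/16
        # ...other existing flags left unchanged...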

Once I added these fields to the yaml, I restarted the kubelet service, and everything worked great when adding a new worker node.
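Since kube-controller-manager runs as a static pod, the kubelet should re-create it when the manifest changes; restarting kubelet (as I did) also works. Roughly, and assuming the usual kubeadm pod labels:

    # on the affected master: restart kubelet so it re-reads the static pod manifests
    systemctl restart kubelet

    # then confirm the controller-manager picked up the flags
    kubectl -n kube-system get pod -l component=kube-controller-manager -o yaml \
      | grep -E 'allocate-node-cidrs|cluster-cidr'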

This was a mistake on my part: when initializing one of my master nodes with kubeadm init, I must have forgotten to pass --pod-network-cidr. Oops.

Hope this helps someone out there!

-- user10452751
Source: StackOverflow