kubeadm init starts cluster with incorrect IP addresses

7/27/2020

I initialize a 5-node Kubernetes cluster as follows:

```bash
[root@lpdkubpoc01a ~]# kubeadm init --pod-network-cidr=10.96.0.0/16 --service-cidr=10.97.0.0/16 --image-repository quaytest.phx.aexp.com/control-plane
W0727 15:19:51.123991    1866 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: dial tcp: lookup dl.k8s.io on 10.2.88.196:53: no such host
W0727 15:19:51.124080    1866 version.go:102] falling back to the local client version: v1.17.5
W0727 15:19:51.124236    1866 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0727 15:19:51.124244    1866 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.5
[preflight] Running pre-flight checks
...
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster. Run "kubectl apply -f podnetwork.yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.22.76.244:6443 --token fa5ia8.oqs7jv9ii6wzex0w \
    --discovery-token-ca-cert-hash sha256:6680c99e6c49e0dce4522bc9768bfc2e7e2b38f5a10668d3a544554ab0d09ff1
```

I run the following, per the instructions above:

```bash
[root@lpdkubpoc01a ~]# mkdir -p $HOME/.kube
[root@lpdkubpoc01a ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite ‘/root/.kube/config’? y
[root@lpdkubpoc01a ~]# chown $(id -u):$(id -g) $HOME/.kube/config
```

But when I check the control-plane component pods, I see that they have all been initialized with (1) the same IP address and (2) an incorrect CIDR. They appear to be on the host network, which is a big no-no:

```bash
[root@lpdkubpoc01a ~]# kubectl get pods -n kube-system -owide
NAME                                                READY   STATUS    RESTARTS   AGE   IP             NODE                        NOMINATED NODE   READINESS GATES
coredns-598947db54-dzrjk                            0/1     Pending   0          37s   <none>         <none>                      <none>           <none>
coredns-598947db54-t2wch                            0/1     Pending   0          37s   <none>         <none>                      <none>           <none>
etcd-lpdkubpoc01a.phx.aexp.com                      1/1     Running   0          50s   10.22.76.244   lpdkubpoc01a.phx.aexp.com   <none>           <none>
kube-apiserver-lpdkubpoc01a.phx.aexp.com            1/1     Running   0          50s   10.22.76.244   lpdkubpoc01a.phx.aexp.com   <none>           <none>
kube-controller-manager-lpdkubpoc01a.phx.aexp.com   1/1     Running   0          50s   10.22.76.244   lpdkubpoc01a.phx.aexp.com   <none>           <none>
kube-proxy-8dbx2                                    1/1     Running   0          38s   10.22.76.244   lpdkubpoc01a.phx.aexp.com   <none>           <none>
kube-scheduler-lpdkubpoc01a.phx.aexp.com            1/1     Running   0          50s   10.22.76.244   lpdkubpoc01a.phx.aexp.com   <none>           <none>
```

What is wrong, and how do I remedy it? The pods in the kube-system namespace should not all have the same IP, and they definitely should not be on the same network as the host:

```bash
[root@lpdkubpoc01a ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:40:17:25:e4  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.22.76.244  netmask 255.255.254.0  broadcast 10.22.77.255
        ether 00:50:56:b8:e1:84  txqueuelen 1000  (Ethernet)
        RX packets 73810789  bytes 8755922965 (8.1 GiB)
        RX errors 0  dropped 31388  overruns 0  frame 0
        TX packets 44487774  bytes 12389932340 (11.5 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 10.0.195.100  netmask 255.255.254.0  broadcast 10.0.195.255
        ether 00:50:56:b8:6c:23  txqueuelen 1000  (Ethernet)
        RX packets 3573616  bytes 708218742 (675.4 MiB)
        RX errors 0  dropped 50118  overruns 0  frame 0
        TX packets 830522  bytes 174979700 (166.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 263222455  bytes 44942504690 (41.8 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 263222455  bytes 44942504690 (41.8 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
```

Thanks!

-- user2405589
calico
coredns
docker
kubeadm
kubernetes

3 Answers

9/29/2020

The control-plane components are static pods: you can see their YAML files in the /etc/kubernetes/manifests/ directory, and the kubelet on the master node is responsible for keeping them running. These pods run on the host network (hostNetwork: true in their manifests), so it is normal for them to report their local host's IP.
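You can confirm this on the control-plane node; the paths below assume a standard kubeadm install:

```bash
# The kubelet watches this directory and runs whatever manifests it
# finds there as static pods, independently of the API server.
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

# hostNetwork: true is why these pods report the node's own IP
# (10.22.76.244 here) instead of a pod-network address.
grep hostNetwork /etc/kubernetes/manifests/kube-apiserver.yaml
```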

By the way, you still need a Kubernetes network add-on such as Calico or Weave Net; see https://kubernetes.io/docs/concepts/cluster-administration/addons/. For example, in your case you just need to run this command to deploy the Weave Net add-on with your desired pod-network-cidr:

```bash
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=10.96.0.0/16"
```

Be careful to specify your pod-network-cidr when deploying the network add-on.
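Once the add-on's pods are up, you can re-run the command from the question to verify the fix:

```bash
# coredns should move from Pending to Running and receive an address
# from 10.96.0.0/16, while the hostNetwork control-plane pods keep
# the node IP 10.22.76.244.
kubectl get pods -n kube-system -o wide
```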

-- mhafshari
Source: StackOverflow

12/24/2020

It's normal for the control-plane pods to have node IPs. Everything else will get IPs from the CNI.
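One way to see the split is to list each pod's hostNetwork flag alongside its IP:

```bash
# Pods with HOSTNETWORK=true (etcd, kube-apiserver, kube-proxy, ...)
# report the node's IP; CNI-managed pods such as coredns get
# pod-network IPs once a network add-on is installed.
kubectl get pods -n kube-system \
  -o custom-columns=NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork,IP:.status.podIP
```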

-- Paul Ma
Source: StackOverflow

7/27/2020

It doesn't look like your pod network has been configured yet. You can install something like Calico or Weave. Your coredns pods should come up after that, and your other pods should get different IP addresses.

In the past, these instructions were on the main kubeadm page, but my understanding is that they have been deprecated in favor of standardizing on CNI and letting each of the CNI providers publish their own installation instructions.
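For example, a Calico install at the time looked roughly like the sketch below; the manifest URL is the one the Calico docs published in that era, so check the current instructions for your version. Since Calico defaults to a 192.168.0.0/16 pool, you would align it with the --pod-network-cidr passed to kubeadm:

```bash
# Sketch: fetch the Calico manifest, point its pool at the cluster's
# pod CIDR, and apply it (URL/version are assumptions; see the Calico docs).
curl -fsSLO https://docs.projectcalico.org/manifests/calico.yaml
# Set CALICO_IPV4POOL_CIDR to 10.96.0.0/16 in calico.yaml, then:
kubectl apply -f calico.yaml
```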

✌️

-- Rico
Source: StackOverflow