kubernetes master and worker nodes getting different ip range

11/10/2018

I have set up a local Kubernetes cluster using Vagrant, and have assigned two network interfaces to each Vagrant box: one public and one private.

kubectl get nodes -o wide

NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kubemaster   Ready    master   14h   v1.12.2   192.168.33.10   <none>        Ubuntu 16.04.5 LTS   4.4.0-137-generic   docker://17.3.2
kubenode2    Ready    <none>   14h   v1.12.2   10.0.2.15       <none>        Ubuntu 16.04.5 LTS   4.4.0-138-generic   docker://17.3.2

While initializing kubeadm on the master, I set the API server advertise address to the master's IP, 192.168.33.10.
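For reference, a minimal sketch of what that init command would look like; the --pod-network-cidr value is only a placeholder and depends on the CNI provider in use:

kubeadm init --apiserver-advertise-address=192.168.33.10 --pod-network-cidr=10.244.0.0/16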

My real issue is that I am not able to log in to any pod.

kubectl exec -ti web /bin/bash

error: unable to upgrade connection: pod does not exist
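For reference, kubectl exec is proxied through the API server to the kubelet on the target node's INTERNAL-IP (port 10250 by default), so the node addresses above matter here; which node the pod landed on can be checked with:

kubectl get pod web -o wide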

-- batman
docker
kubernetes
overlay

1 Answer

11/10/2018

It's because Vagrant, in its default configuration, attaches a NAT interface to every box (usually eth0), and then any additional network interfaces you define -- such as what is likely a host-only interface on 192.168.33.10.
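As a rough sketch, a Vagrantfile along these lines produces exactly that layout: Vagrant adds the NAT adapter automatically, and each private_network line adds a host-only interface (the box name and the 192.168.33.11 address for kubenode2 are assumptions, since only the master's IP appears in the question):

Vagrant.configure("2") do |config|
  config.vm.define "kubemaster" do |m|
    m.vm.box = "ubuntu/xenial64"
    # host-only interface, reachable from the host machine
    m.vm.network "private_network", ip: "192.168.33.10"
  end
  config.vm.define "kubenode2" do |n|
    n.vm.box = "ubuntu/xenial64"
    # hypothetical host-only IP for the worker
    n.vm.network "private_network", ip: "192.168.33.11"
  end
end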

You need to change the kubelet configuration -- and possibly your CNI provider -- to bind to and advertise the IP address of kubenode2 that is in a subnet your machine can reach. Unidirectional traffic from kubenode2 can likely reach kubemaster over the NAT IP, but almost by definition your machine cannot reach anything behind the NAT IP, hence the connection failure when trying to reach the kubelet port.
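A minimal sketch of that kubelet change on kubenode2, assuming a kubeadm-provisioned, systemd-managed kubelet and a hypothetical host-only address of 192.168.33.11 (the file that feeds KUBELET_EXTRA_ARGS can vary by kubeadm version):

# on kubenode2: register the kubelet with the host-only IP instead of the NAT IP
echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.33.11' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet

# verify that INTERNAL-IP now shows 192.168.33.11
kubectl get nodes -o wide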

-- mdaniel
Source: StackOverflow