Unable to sh/bash into any pod created on a newly attached worker node in a Kubernetes cluster

1/16/2020

Added kube-node02 to the existing cluster with kubeadm join. Pods are scheduled onto the new node just fine, but I am unable to get an interactive terminal into those pods.

vagrant@kube-master01:~$ k get po -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP          NODE          NOMINATED NODE   READINESS GATES
dns-test-busybox        1/1     Running   0          37m   10.47.0.3   kube-node02   <none>           <none>
label-pod-demo          1/1     Running   0          54m   10.44.0.5   kube-node01   <none>           <none>
nginx-dns               1/1     Running   0          43m   10.47.0.2   kube-node02   <none>           <none>
pod-demo-label-node02   1/1     Running   0          46m   10.47.0.1   kube-node02   <none>           <none>

Pod deployed on node01

vagrant@kube-master01:~$ k exec -it label-pod-demo sh
# ^C
# 

Pod deployed on node02

vagrant@kube-master01:~$ k exec -it pod-demo-label-node02 sh
error: unable to upgrade connection: pod does not exist
-- JithZ
docker
kubeadm
kubectl
kubernetes
ubuntu

1 Answer

1/16/2020

Luckily I found the issue: in k get nodes -o wide, kube-node02 was reporting an INTERNAL-IP from a different range (10.0.2.15, the default VirtualBox NAT address, instead of the 192.168.50.x host-only network). kubectl exec is proxied from the API server to the kubelet at the node's INTERNAL-IP, so an unreachable address breaks exec (as well as logs and port-forward) even though pods are still scheduled normally.

vagrant@kube-master01:~$ k get nodes -o wide
NAME            STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kube-master01   Ready    master   6d13h   v1.17.0   192.168.50.10   <none>        Ubuntu 18.04.3 LTS   4.15.0-58-generic   docker://19.3.4
kube-node01     Ready    <none>   6d13h   v1.17.0   192.168.50.11   <none>        Ubuntu 18.04.3 LTS   4.15.0-58-generic   docker://19.3.4
kube-node02     Ready    <none>   56m     v1.17.0   10.0.2.15       <none>        Ubuntu 18.04.3 LTS   4.15.0-58-generic   docker://19.3.4

Fixed it by adding Environment="KUBELET_EXTRA_ARGS=--node-ip=192.168.50.21" in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the worker node, then running sudo systemctl daemon-reload and sudo systemctl restart kubelet. Now the node shows the correct IP and I am able to exec into its pods.
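Roughly, the steps on the worker node looked like this (a sketch assuming the kubeadm/systemd layout shown above; the IP is node-specific, so use the host-only address of the node you are fixing):

vagrant@kube-node02:~$ sudo vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# add this line inside the [Service] section, using this node's host-only IP:
Environment="KUBELET_EXTRA_ARGS=--node-ip=192.168.50.21"
vagrant@kube-node02:~$ sudo systemctl daemon-reload
vagrant@kube-node02:~$ sudo systemctl restart kubelet
vagrant@kube-master01:~$ k get nodes -o wide    # INTERNAL-IP for kube-node02 should now show 192.168.50.21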

vagrant@kube-master01:~$ k get nodes -o wide
NAME            STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kube-master01   Ready    master   6d13h   v1.17.0   192.168.50.10   <none>        Ubuntu 18.04.3 LTS   4.15.0-58-generic   docker://19.3.4
kube-node01     Ready    <none>   6d13h   v1.17.0   192.168.50.11   <none>        Ubuntu 18.04.3 LTS   4.15.0-58-generic   docker://19.3.4
kube-node02     Ready    <none>   62m     v1.17.0   192.168.50.21   <none>        Ubuntu 18.04.3 LTS   4.15.0-58-generic   docker://19.3.4
vagrant@kube-master01:~$ k exec -it pod-demo-label-node02 sh
# 
vagrant@kube-master01:~$

Not sure why this question is downvoted. It was an issue I faced, so I thought of posting it. Not sure why it is considered unuseful or unclear; downvoters should at least leave a comment so we can understand!

-- JithZ
Source: StackOverflow