Kubernetes: Unable to ping pod IP on another node

1/19/2020

Pod IPs only respond to ping from the node the pod is scheduled on.

When I try pinging a pod IP from another node/worker, the ping fails.

master2@master2:~$ kubectl get pods --namespace=kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE      NOMINATED NODE   READINESS GATES
calico-kube-controllers-6ff8cbb789-lxwqq   1/1     Running   0          6d21h   192.168.180.2     master2   <none>           <none>
calico-node-4mnfk                          1/1     Running   0          4d20h   10.10.41.165      node3     <none>           <none>
calico-node-c4rjb                          1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
calico-node-dgqwx                          1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
calico-node-fhtvz                          1/1     Running   0          6d21h   10.10.41.161      node2     <none>           <none>
calico-node-mhd7w                          1/1     Running   0          4d21h   10.10.41.155      node1     <none>           <none>
coredns-8b5d5b85f-fjq72                    1/1     Running   0          45m     192.168.135.11    node3     <none>           <none>
coredns-8b5d5b85f-hgg94                    1/1     Running   0          45m     192.168.166.136   node1     <none>           <none>
etcd-master1                               1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
etcd-master2                               1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-apiserver-master1                     1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-apiserver-master2                     1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-controller-manager-master1            1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-controller-manager-master2            1/1     Running   2          6d21h   10.10.41.159      master2   <none>           <none>
kube-proxy-66nxz                           1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-proxy-fnrrz                           1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-proxy-lq5xp                           1/1     Running   0          6d21h   10.10.41.161      node2     <none>           <none>
kube-proxy-vxhwm                           1/1     Running   0          4d21h   10.10.41.155      node1     <none>           <none>
kube-proxy-zgwzq                           1/1     Running   0          4d20h   10.10.41.165      node3     <none>           <none>
kube-scheduler-master1                     1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-scheduler-master2                     1/1     Running   1          6d21h   10.10.41.159      master2   <none>           <none>

When I try to ping the pod with IP 192.168.104.8 on node2 from node3, it fails with 100% packet loss (see the transcript after the pod list below).

master1@master1:~/cluster$ sudo kubectl get pods  -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
contentms-cb475f569-t54c2    1/1     Running   0          6d21h   192.168.104.1    node2   <none>           <none>
nav-6f67d5bd79-9khmm         1/1     Running   0          6d8h    192.168.104.8    node2   <none>           <none>
react                        1/1     Running   0          7m24s   192.168.135.12   node3   <none>           <none>
statistics-5668cd7dd-thqdf   1/1     Running   0          6d15h   192.168.104.4    node2   <none>           <none>
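
A quick way to reproduce the failure is a plain ping from node3 to the nav pod's IP; the transcript below is reconstructed to match the symptom, not copied from the cluster:

node3@node3:~$ ping -c 3 192.168.104.8
PING 192.168.104.8 (192.168.104.8) 56(84) bytes of data.

--- 192.168.104.8 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2031ms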
-- piyush
bare-metal-server
kubernetes
nodes
project-calico

1 Answer

1/22/2020

It was a routing issue.

Each node had two IPs, one on eth0 and one on eth1.

In the routes, Calico was using the eth1 IP in place of the eth0 IP.
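
You can see this in a node's routing table. With Calico's default IPIP mode, each remote node's pod CIDR gets a route via that node's IP over tunl0; in the sketch below, node3's pod CIDR (192.168.135.0/26) points at an eth1 address (172.16.0.5 is illustrative, not from this cluster) instead of node3's eth0 address 10.10.41.165:

node2@node2:~$ ip route | grep tunl0
192.168.135.0/26 via 172.16.0.5 dev tunl0 proto bird onlink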

I disabled the eth1 IPs and everything worked.
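
Rather than disabling eth1, Calico can also be told which interface to use for node addresses via its documented IP_AUTODETECTION_METHOD setting; a sketch, assuming the standard calico-node DaemonSet in kube-system:

master1@master1:~$ kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth0

This triggers a rolling restart of the calico-node pods, after which routes are advertised using each node's eth0 IP.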

-- piyush
Source: StackOverflow