Not able to ping from a pod deployed on the worker node

6/6/2019

I have one master Node and one worker Node

On the worker node, I ran just two commands:

a) kubeadm reset
b) kubeadm join ......... ..... ..... ....

Do I also need to run anything like the following, which I ran on the master node?

a) kubeadm init
b) kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

I did not run kubeadm init on the worker because I think that would turn it into a master node. In any case, I am not able to use commands like kubectl get nodes or kubectl get pods there.

Both my master node and worker node show status Ready.

I deployed one pod on the master node and I am able to ping www.google.com from it,

but when I deployed a pod using:

spec:
  nodeSelector:
    nodeName: nodeName

the pod was successfully scheduled on the worker node using the label,

but I am not able to ping from inside that pod.
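For reference, the label-based scheduling above can be sketched as a minimal manifest. The label key nodeName, its value, and the pod details below are placeholders based on my setup; adjust them to yours:

# label the worker node first (run on the master):
#   kubectl label node ip-172-31-11-87 nodeName=worker

apiVersion: v1
kind: Pod
metadata:
  name: ping-test
spec:
  nodeSelector:
    nodeName: worker        # must match the label set on the node
  containers:
  - name: ping-test
    image: busybox
    command: ["sleep", "3600"]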

Output of commands from the master node:

aquilak8suser@ip-172-31-6-149:/$ kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
ip-172-31-11-87   Ready    <none>   4h35m   v1.13.3
ip-172-31-6-149   Ready    master   11h     v1.13.3




aquilak8suser@ip-172-31-6-149:/$ kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE              NOMINATED NODE   READINESS GATES
calico-kube-controllers-5f454f49dd-75r5w   1/1     Running   0          11h     192.168.179.67   ip-172-31-6-149   <none>           <none>
calico-node-298r4                          0/1     Running   0          11h     172.31.6.149     ip-172-31-6-149   <none>           <none>
calico-node-5979v                          0/1     Running   0          4h37m   172.31.11.87     ip-172-31-11-87   <none>           <none>
coredns-86c58d9df4-6rzt2                   1/1     Running   0          11h     192.168.179.65   ip-172-31-6-149   <none>           <none>
coredns-86c58d9df4-722tb                   1/1     Running   0          11h     192.168.179.66   ip-172-31-6-149   <none>           <none>
etcd-ip-172-31-6-149                       1/1     Running   0          11h     172.31.6.149     ip-172-31-6-149   <none>           <none>
kube-apiserver-ip-172-31-6-149             1/1     Running   0          11h     172.31.6.149     ip-172-31-6-149   <none>           <none>
kube-controller-manager-ip-172-31-6-149    1/1     Running   0          11h     172.31.6.149     ip-172-31-6-149   <none>           <none>
kube-proxy-496gh                           1/1     Running   0          4h37m   172.31.11.87     ip-172-31-11-87   <none>           <none>
kube-proxy-7684r                           1/1     Running   0          11h     172.31.6.149     ip-172-31-6-149   <none>           <none>
kube-scheduler-ip-172-31-6-149             1/1     Running   0          11h     172.31.6.149     ip-172-31-6-149   <none>           <none>

aquilak8suser@ip-172-31-6-149:/$ kubectl logs coredns-86c58d9df4-6rzt2 --tail=200 -n kube-system
.:53
2019-06-06T04:20:31.271Z [INFO] CoreDNS-1.2.6
2019-06-06T04:20:31.271Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769


root@spring-boot-demo-pricing-66f668cbb4-q5dc2:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
options ndots:5
root@spring-boot-demo-pricing-66f668cbb4-q5dc2:/#
-- Dhanraj
kubernetes
kubernetes-pod

1 Answer

6/6/2019
  1. No, you do not need to run kubeadm init or kubectl apply -f "https://cloud.weave...... on the worker nodes.

  2. To use kubectl commands from the worker nodes, you need to copy the /etc/kubernetes/admin.conf file from the master to the worker nodes and place it at /home/{username}/.kube/config:

scp /etc/kubernetes/admin.conf {workerNodeUser}@{workerNodeIP}:/home/{username}/.kube/config

Once you have transferred the config, you can run kubectl commands on the worker nodes as well.
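Placing the file on the worker node might look like this (a sketch, assuming admin.conf has already been copied into the current directory; kubectl reads $HOME/.kube/config by default):

mkdir -p "$HOME/.kube"
cp admin.conf "$HOME/.kube/config"
chmod 600 "$HOME/.kube/config"     # the file contains cluster credentials
# alternatively, point kubectl at it explicitly:
export KUBECONFIG="$HOME/.kube/config"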

  3. There can be many reasons for not being able to ping from the worker node's pod. First, check whether the worker node itself can ping google.com. If that works, check your cluster DNS (kube-dns or CoreDNS): look at its logs and confirm the pods are healthy. You may also try replacing the nameservers in the pod's /etc/resolv.conf with a public DNS server such as Google's (8.8.8.8). Lastly, you can follow this.
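The checks above can be sketched as a sequence of commands. The pod and CoreDNS names are taken from the question's output; run these from the master (except the first, which runs on the worker node itself):

# 1. On the worker node: can the host reach the internet at all?
ping -c 3 google.com

# 2. Is cluster DNS healthy? (CoreDNS pods carry the k8s-app=kube-dns label)
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
kubectl logs -n kube-system coredns-86c58d9df4-6rzt2 --tail=50

# 3. From inside the failing pod: does DNS resolve at all?
kubectl exec -it spring-boot-demo-pricing-66f668cbb4-q5dc2 -- nslookup kubernetes.default
kubectl exec -it spring-boot-demo-pricing-66f668cbb4-q5dc2 -- nslookup google.com

# 4. Temporary workaround: point the pod at a public resolver
#    (edits to /etc/resolv.conf inside a pod do not survive restarts):
kubectl exec -it spring-boot-demo-pricing-66f668cbb4-q5dc2 -- \
  sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'

If step 1 fails, the problem is the node's networking rather than Kubernetes; if step 3 fails only for external names, suspect the CNI or upstream DNS configuration.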
-- garlicFrancium
Source: StackOverflow