Pods running in Kubernetes slave node are in ContainerCreating state

6/28/2018

I was able to set up the Kubernetes master successfully. I created the Kubernetes slave node by installing Docker and kubelet (using kubeadm). After running the join command, the slave node joined the cluster, and I can verify that from the master node. But the pods that get deployed to the slave node are always stuck in the ContainerCreating state. Apart from Docker and kubelet, is there anything else that needs to be installed on the slave node?
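
For reference, the node was joined with the usual kubeadm join command, something like the following (the API server address, token, and hash are placeholders):

$ kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>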

The status of kubelet shows: remote_runtime.go: RunPodSandbox from runtime service failed: rpc error: code = DeadlineExceeded

Appreciate your help.

-- Balakumar Ezhilmaran
amazon-ec2
kubeadm
kubernetes

1 Answer

6/29/2018

In such cases I would usually start troubleshooting the cluster by checking the state of the pods in the kube-system namespace with the command:

$ kubectl get pods --all-namespaces -o wide

There should be several pods related to networking running on each node, e.g.:

NAMESPACE     NAME                    READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   calico-node-2rpns       2/2       Running   0          2h        10.154.0.5       kube-node1
kube-system   calico-node-cn6cl       2/2       Running   0          2h        10.154.0.6       kube-master
kube-system   calico-node-fr7v5       2/2       Running   1          2h        10.154.0.7       kube-node2

The full set of networking containers depends on which Kubernetes network solution is used.
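
If the networking pods are missing or failing on the new node, it is worth checking whether the network add-on's DaemonSet has scheduled there and whether a CNI config exists on the node. A quick sketch, assuming a standard kubeadm setup where the add-on writes its config to the default CNI directory:

$ kubectl get daemonsets -n kube-system
$ ls /etc/cni/net.d/    # run on the slave node; should contain the config written by the network add-on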

Next, I check whether any pods are in a “Not Ready” state and check the errors in their description:

$ kubectl describe pod not-ready-pod-name -n <namespace>

If there are errors related to image pulling or container creation, I check the kubelet logs on the node for more details:

$ journalctl -u kubelet
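
To narrow that output down to the sandbox/network errors mentioned in the question, the log can also be filtered; the grep pattern here is just an example:

$ journalctl -u kubelet --no-pager | grep -iE 'RunPodSandbox|cni|network'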

or try to pull the image manually to make sure it is available and can be pulled:

$ docker pull <image>

If a pod has many restarts, I check the pod's container logs:

$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}

or the logs of the previous, crashed container:

$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
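
If the container name is not known, it can be listed from the pod spec; the jsonpath expression below is just one way to do it:

$ kubectl get pod ${POD_NAME} -o jsonpath='{.spec.containers[*].name}'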

My next steps depend on the previous results.

If you add your results to the question, it will be possible to say more about your case.

-- VAS
Source: StackOverflow