Pod's status is always ContainerCreating. Events show 'Failed create pod sandbox'

1/14/2018

I am trying to create a deployment on a K8s cluster with one master and two worker nodes, running on 3 AWS EC2 instances. I have been using this environment for quite some time to play with Kubernetes. Three days ago, the status of all the pods started changing from Running to ContainerCreating. Only the pods scheduled on the master show as Running; the pods on the worker nodes show as ContainerCreating. When I run kubectl describe pod <podname>, the events show the following:

 Events:
  Type     Reason                  Age   From                      Message
  ----     ------                  ----  ----                      -------
  Normal   Scheduled               34s   default-scheduler         Successfully assigned nginx-8586cf59-5h2dp to ip-172-31-20-57
  Normal   SuccessfulMountVolume   34s   kubelet, ip-172-31-20-57  MountVolume.SetUp succeeded for volume "default-token-wz7rs"
  Warning  FailedCreatePodSandBox  4s    kubelet, ip-172-31-20-57  Failed create pod sandbox.
  Normal   SandboxChanged          3s    kubelet, ip-172-31-20-57  Pod sandbox changed, it will be killed and re-created.

This error has been bugging me. I searched online for related errors but couldn't find anything specific. I did kubeadm reset on the cluster, including the master and worker nodes, and brought the cluster up again. The node status shows Ready, but I run into the same problem whenever I try to create a deployment, for example with the command below:

kubectl run nginx --image=nginx --replicas=2
-- userNB13
kubernetes
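
At the time, `kubectl run` with `--replicas` created a Deployment under the hood; the declarative equivalent is roughly the manifest below. Treat this as a sketch: the `run: nginx` label mirrors what `kubectl run` generated back then, and field details vary across Kubernetes versions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```

Applying it with `kubectl apply -f nginx-deployment.yaml` reproduces the same symptom, which suggests the problem is on the nodes rather than in how the deployment is created.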

3 Answers

2/19/2019

I run k8s on a few DigitalOcean droplets and was stuck on this very issue. No other info was given - just the FailedCreatePodSandBox warning, complaining about a file I had never seen before.

I spent a lot of time trying to figure it out - the only thing that fixed the issue for me was fully restarting the master and each worker node. That got things going instantly.

sudo shutdown -r now

-- adstwlearn
Source: StackOverflow

9/11/2018

This can occur if you specify a memory limit or request with the wrong unit suffix: in Kubernetes resource quantities, a lowercase "m" means milli (one thousandth), while "Mi" means mebibytes.

Below triggered the message:

resources:
  limits:
    cpu: "300m"
    memory: "256m"
  requests:
    cpu: "50m"
    memory: "64m"

The correct version would be:

resources:
  limits:
    cpu: "300m"
    memory: "256Mi"
  requests:
    cpu: "50m"
    memory: "64Mi"
-- Jonathan
Source: StackOverflow
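
The unit mix-up above can be illustrated with a small sketch. This is not the actual Kubernetes quantity parser; the suffix table is an assumption based on the resource-quantity notation, pared down to the suffixes that matter here:

```python
# Rough sketch of how Kubernetes resource-quantity suffixes scale a value.
# NOT the real parser -- just enough to show why "256m" is not 256 mebibytes.
SUFFIXES = {
    "m": 1e-3,                               # milli: fine for CPU ("300m" = 0.3 cores)
    "k": 1e3, "M": 1e6, "G": 1e9,            # decimal suffixes
    "Ki": 2**10, "Mi": 2**20, "Gi": 2**30,   # binary suffixes
}

def to_number(quantity: str) -> float:
    """Interpret a quantity string like '256Mi' or '256m' as a raw number."""
    # Try the longest suffixes first so "Mi" is not mistaken for "M".
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if quantity.endswith(suffix):
            return float(quantity[: -len(suffix)]) * SUFFIXES[suffix]
    return float(quantity)  # no suffix: a plain number of bytes / cores

print(to_number("256Mi"))  # 268435456.0 -- 256 mebibytes, as intended
print(to_number("256m"))   # 0.256       -- roughly a quarter of one byte!
```

With "256m", the kubelet is being asked for a memory limit of a fraction of a byte, which no container sandbox can satisfy.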

8/11/2018

This might help someone else: I spent a weekend on this until I noticed I had requested 1000 mem instead of 1000Mi...

-- frbl
Source: StackOverflow