FailedCreatePodSandBox & kubelet, $(Slave name) Pod sandbox changed, it will be killed and re-created

4/14/2019

I am running a Kubernetes cluster with six nodes (cluster-master and kubernetes-slave0 through kubernetes-slave4) on Ubuntu 18.04 "Bionic Beaver", and I'm using Weave as the network plugin. To install Kubernetes I followed https://vitux.com/install-and-deploy-kubernetes-on-ubuntu/, and I installed Weave after cleanly removing the network plugin that guide recommended (it doesn't show up in my pods anymore).
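(For reference, the standard Weave Net install at the time was the one-liner below, from the Weave docs; whether this is exactly the command used here is an assumption:)

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"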

kubectl get pods --all-namespaces returns:

NAMESPACE     NAME                                     READY   STATUS              RESTARTS   AGE
kube-system   coredns-fb8b8dccf-g8psp                  0/1     ContainerCreating   0          77m
kube-system   coredns-fb8b8dccf-pbfr7                  0/1     ContainerCreating   0          77m
kube-system   etcd-cluster-master                      1/1     Running             5          77m
kube-system   kube-apiserver-cluster-master            1/1     Running             5          77m
kube-system   kube-controller-manager-cluster-master   1/1     Running             5          77m
kube-system   kube-proxy-72s98                         1/1     Running             5          73m
kube-system   kube-proxy-cqmdm                         1/1     Running             3          63m
kube-system   kube-proxy-hgnpj                         1/1     Running             0          69m
kube-system   kube-proxy-nhjdc                         1/1     Running             5          72m
kube-system   kube-proxy-sqvdd                         1/1     Running             5          77m
kube-system   kube-proxy-vmg9k                         1/1     Running             0          70m
kube-system   kube-scheduler-cluster-master            1/1     Running             5          76m
kube-system   kubernetes-dashboard-5f7b999d65-p7clv    0/1     ContainerCreating   0          61m
kube-system   weave-net-2brvt                          2/2     Running             0          69m
kube-system   weave-net-5wlks                          2/2     Running             16         72m
kube-system   weave-net-65qmd                          2/2     Running             16         73m
kube-system   weave-net-9x8cz                          2/2     Running             9          63m
kube-system   weave-net-r2nhz                          2/2     Running             15         75m
kube-system   weave-net-stq8x                          2/2     Running             0          70m

and if I run kubectl describe pods $(kube dashboard pod name) --namespace=kube-system against the dashboard pod, it returns:

NAME                                    READY   STATUS              RESTARTS   AGE
kubernetes-dashboard-5f7b999d65-p7clv   0/1     ContainerCreating   0          64m
rock64@cluster-master:~$
rock64@cluster-master:~$ kubectl describe pods kubernetes-dashboard-5f7b999d65-p7clv --namespace=kube-system
Name:               kubernetes-dashboard-5f7b999d65-p7clv
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               kubernetes-slave1/10.0.0.215
Start Time:         Sun, 14 Apr 2019 10:20:42 +0000
Labels:             k8s-app=kubernetes-dashboard
                    pod-template-hash=5f7b999d65
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      ReplicaSet/kubernetes-dashboard-5f7b999d65
Containers:
  kubernetes-dashboard:
    Container ID:
    Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    Image ID:
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-znrkr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kubernetes-dashboard-token-znrkr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-znrkr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                     From                        Message
  ----     ------                  ----                    ----                        -------
  Normal   Scheduled               64m                     default-scheduler           Successfully assigned kube-system/kubernetes-dashboard-5f7b999d65-p7clv to kubernetes-slave1
  Warning  FailedCreatePodSandBox  64m                     kubelet, kubernetes-slave1  Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "4e6d9873f49a02e86cef79e338ce97162291897b2aaad1ddb99c5e066ed42178" network for pod "kubernetes-dashboard-5f7b999d65-p7clv": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-5f7b999d65-p7clv_kube-system" network: failed to find plugin "loopback" in path [/opt/cni/bin], failed to clean up sandbox container "4e6d9873f49a02e86cef79e338ce97162291897b2aaad1ddb99c5e066ed42178" network for pod "kubernetes-dashboard-5f7b999d65-p7clv": NetworkPlugin cni failed to teardown pod "kubernetes-dashboard-5f7b999d65-p7clv_kube-system" network: failed to find plugin "portmap" in path [/opt/cni/bin]]
  Normal   SandboxChanged          59m (x25 over 64m)      kubelet, kubernetes-slave1  Pod sandbox changed, it will be killed and re-created.
  Normal   SandboxChanged          49m (x18 over 53m)      kubelet, kubernetes-slave1  Pod sandbox changed, it will be killed and re-created.
  Normal   SandboxChanged          46m (x13 over 48m)      kubelet, kubernetes-slave1  Pod sandbox changed, it will be killed and re-created.
  Normal   SandboxChanged          24m (x94 over 44m)      kubelet, kubernetes-slave1  Pod sandbox changed, it will be killed and re-created.
  Normal   SandboxChanged          4m12s (x26 over 9m52s)  kubelet, kubernetes-slave1  Pod sandbox changed, it will be killed and re-created.
-- MatrixKiller420
kubernetes

1 Answer

4/15/2019

failed to find plugin "loopback" in path [/opt/cni/bin]

As the helpful message is trying to explain to you, it appears you have a botched CNI installation. Any time you see FailedCreatePodSandBox or SandboxChanged events, it is almost always related to a CNI failure.
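A few quick checks on the affected node (kubernetes-slave1 here) will confirm it; these assume the default kubelet paths:

ls -l /opt/cni/bin                                       # should list loopback, portmap, bridge, host-local, ...
ls /etc/cni/net.d                                        # the CNI config Weave drops, e.g. 10-weave.conflist
journalctl -u kubelet --no-pager | grep -i cni | tail    # the kubelet's view of the failure

Per the error in your Events, /opt/cni/bin is missing at least the "loopback" and "portmap" binaries.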

The very short version is to grab the latest CNI plugins release, unpack it into /opt/cni/bin, ensure the binaries are executable, and restart the affected pieces: certainly the offending Pod, most likely the kubelet, and, if all else fails, the machine itself.
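A minimal sketch of those steps, run on each affected node. The version, architecture, and archive name below are assumptions (the rock64 prompt suggests arm64 boards), so verify them against https://github.com/containernetworking/plugins/releases, as the asset naming has changed between releases:

CNI_VERSION="v0.7.5"   # assumption: pick the current release from the page above
ARCH="arm64"           # assumption: rock64 boards are arm64; use amd64 on x86 nodes
sudo mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-${ARCH}-${CNI_VERSION}.tgz" \
  | sudo tar -C /opt/cni/bin -xz             # tar preserves the executable bits
sudo systemctl restart kubelet

Then, from the master, delete the stuck Pod so its ReplicaSet re-creates it:

kubectl -n kube-system delete pod kubernetes-dashboard-5f7b999d65-p7clv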

P.S. You will have a much nicer time here on SO by doing a little searching first, as this is a very common question.

-- mdaniel
Source: StackOverflow