Helm init for tiller deploy is stuck in ContainerCreating status

10/24/2019
NAME                                 READY   STATUS              RESTARTS   AGE
coredns-5644d7b6d9-289qz             1/1     Running             0          76m
coredns-5644d7b6d9-ssbb2             1/1     Running             0          76m
etcd-k8s-master                      1/1     Running             0          75m
kube-apiserver-k8s-master            1/1     Running             0          75m
kube-controller-manager-k8s-master   1/1     Running             0          75m
kube-proxy-2q9k5                     1/1     Running             0          71m
kube-proxy-dz9pk                     1/1     Running             0          76m
kube-scheduler-k8s-master            1/1     Running             0          75m
tiller-deploy-7b875fbf86-8nxmk       0/1     ContainerCreating   0          17m
weave-net-nzb67                      2/2     Running             0          75m
weave-net-t8kmk                      2/2     Running             0          71m

I installed Kubernetes v1.16.2, but when installing Tiller with a new service account the pod gets stuck in ContainerCreating. I have tried all the usual fixes, such as setting up RBAC, removing the Tiller role and recreating it, and reinstalling Kubernetes. The sequence I followed is sketched below.
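
For context, the usual sequence for installing Tiller with a new service account (assuming the standard tiller account name and a cluster-admin binding, as in the Helm v2 docs) is roughly:

# Create the service account and bind it to cluster-admin
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

# Install Tiller using that service account
helm init --service-account tiller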

The output of kubectl describe is as follows.

[02:32:50] root@k8s-master$ kubectl describe pods  tiller-deploy-7b875fbf86-8nxmk  --namespace  kube-system
Name:           tiller-deploy-7b875fbf86-8nxmk
Namespace:      kube-system
Priority:       0
Node:           worker-node1/172.17.0.1
Start Time:     Thu, 24 Oct 2019 14:12:45 -0400
Labels:         app=helm
                name=tiller
                pod-template-hash=7b875fbf86
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/tiller-deploy-7b875fbf86
Containers:
  tiller:
    Container ID:
    Image:          gcr.io/kubernetes-helm/tiller:v2.15.1
    Image ID:
    Ports:          44134/TCP, 44135/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from tiller-token-rr2jg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  tiller-token-rr2jg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  tiller-token-rr2jg
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                From                   Message
  ----     ------                  ----               ----                   -------
  Normal   Scheduled               <unknown>          default-scheduler      Successfully assigned kube-system/tiller-deploy-7b875fbf86-8nxmk to worker-node1
  Warning  FailedCreatePodSandBox  61s (x5 over 17m)  kubelet, worker-node1  Failed create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Normal   SandboxChanged          60s (x5 over 17m)  kubelet, worker-node1  Pod sandbox changed, it will be killed and re-created.
[~]
-- Amrit Pal Singh
kubernetes
kubernetes-helm

1 Answer

10/25/2019

FailedCreatePodSandBox

Means that worker-node1 (172.17.0.1) does not have a CNI plugin installed or correctly configured; this is a frequently asked question. Whatever mechanism you used to install Kubernetes did not do a robust job of it, or else you missed a step along the way.
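
One quick way to confirm is to check on worker-node1 whether the CNI config and plugin binaries are actually present (the paths below are the standard CNI locations; adjust if your install uses different ones):

# On worker-node1: there should be a weave (or other CNI) config file here
ls /etc/cni/net.d/

# and the CNI plugin binaries here
ls /opt/cni/bin/

# From the master: check that the weave-net pod running on worker-node1 is actually healthy
kubectl -n kube-system get pods -o wide | grep weave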

I also have pretty high confidence that your kubelet logs on worker-node1 are filled with error messages, if you were to actually look at them.
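
Assuming the node was set up with kubeadm and kubelet runs under systemd (adjust the commands if not), something like this on worker-node1 will show them:

# Follow the kubelet logs live
journalctl -u kubelet -f

# Or filter recent entries for CNI / network-plugin errors
journalctl -u kubelet --since "1 hour ago" | grep -i -e cni -e network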

-- mdaniel
Source: StackOverflow