I recently created an Azure Kubernetes Service (AKS) cluster. I followed this tutorial to create and use NGINX on my AKS, and I created my volumes and deployment. These files already work on another cluster, but on this one my pods stay in the ContainerCreating status:
Name: sw-bo-9f7cc5d7d-tms2p
Namespace: default
Node: aks-agentpool-34372919-2/172.16.35.194
Start Time: Thu, 23 Aug 2018 16:38:44 +0200
Labels: app=sw-bo
pod-template-hash=593771838
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/sw-bo-9f7cc5d7d
Containers:
sw-bo:
Container ID:
Image: captaincontainerregdev.azurecr.io/sw-bo:latest
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
memory: 2Gi
Requests:
memory: 1Gi
Environment:
APP_SLEEP: 10
SPRING_DATASOURCE_URL: jdbc:postgresql://*************.postgres.database.azure.com:5432/swbo
SPRING_PROFILES_ACTIVE: prod,swagger
SPRING_DATASOURCE_USERNAME: Optional: false
SPRING_DATASOURCE_PASSWORD: Optional: false
APPLICATION_PROXY_BASE_URL: https://*************.westeurope.cloudapp.azure.com/api
APPLICATION_GEOSERVER_URL: https://*************.westeurope.cloudapp.azure.com/geoserver
APPLICATION_GEOSERVER_WORKSPACE: *************
APPLICATION_NODERED_URL: https://*************.westeurope.cloudapp.azure.com/nodered
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hwhhj (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-hwhhj:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hwhhj
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m default-scheduler Successfully assigned sw-bo-9f7cc5d7d-tms2p to aks-agentpool-34372919-2
Normal SuccessfulMountVolume 1m kubelet, aks-agentpool-34372919-2 MountVolume.SetUp succeeded for volume "default-token-hwhhj"
Warning FailedCreatePodContainer 7s (x10 over 2m) kubelet, aks-agentpool-34372919-2 unable to ensure pod container exists: failed to create container for /kubepods/burstable/pod3ae78151-a6e2-11e8-a1e7-5a18d5725d34 : mountpoint for devices not found
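In case it helps others hitting the same message: the kubelet error "mountpoint for devices not found" usually points at the "devices" cgroup hierarchy missing on the host, so a first diagnostic step (a sketch, assuming a Linux node with cgroup v1; the node name comes from the describe output above) is to SSH into the node and check whether that hierarchy is still mounted:

```shell
# Run on the affected node (e.g. after SSHing to aks-agentpool-34372919-2).
# Is the devices controller known to the kernel at all?
grep devices /proc/cgroups || true
# Is the devices cgroup hierarchy currently mounted?
grep cgroup /proc/mounts | grep devices || echo "devices cgroup not mounted"
# Which cgroup hierarchies exist on this host?
ls /sys/fs/cgroup
```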
I initially thought it was a problem with my files, but all the pods in kube-system fail with the same error.
root@frparcaptainpil:~/azure# kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
addon-http-application-routing-default-http-backend-556559l4h52 0/1 Error 3 1d
addon-http-application-routing-external-dns-59db8c7666-txlf2 0/1 Error 3 1d
addon-http-application-routing-nginx-ingress-controller-86v4rfn 0/1 Completed 8 1d
azure-cni-networkmonitor-65xxm 0/1 Error 5 1d
azure-cni-networkmonitor-dwcl5 0/1 Error 7 1d
azure-cni-networkmonitor-m7sqc 0/1 Error 10 1d
azureproxy-7bb5c9d7fb-bk5wh 0/1 Completed 7 1d
heapster-7b6867b589-x7k97 0/2 Error 6 1d
kube-dns-v20-55645bfd65-96gb8 0/3 Error 3 10h
kube-dns-v20-55645bfd65-qwj29 0/3 Error 12 1d
kube-proxy-m5bqh 0/1 Error 10 1d
kube-proxy-q5hch 0/1 Error 7 1d
kube-proxy-rd8qg 0/1 Error 5 1d
kube-svc-redirect-4kjlp 0/1 CrashLoopBackOff 8 1d
kube-svc-redirect-52vmh 0/1 Error 4 1d
kube-svc-redirect-psc95 0/1 Error 8 1d
kubernetes-dashboard-844cf88ddc-f6s8j 0/1 Error 6 10h
metrics-server-64f6d6b47-4jm24 0/1 ContainerCreating 0 10h
omsagent-8zptm 0/1 Error 3 1d
omsagent-rs-675945f67d-ghdps 0/1 RunContainerError 5 1d
omsagent-z9m55 0/1 Error 5 1d
omsagent-ztzll 0/1 Error 5 1d
tiller-deploy-f9b8476d-tsh88 0/1 Error 2 1d
tunnelfront-78d4bcd765-psq4j 0/1 Error 3 1d
zooming-owl-nginx-ingress-controller-58b9fc4854-bffs2 0/1 Error 0 10h
zooming-owl-nginx-ingress-default-backend-585b4794db-2ltrj 0/1 ContainerCreating 0 9m
Below is an example of the description of a kube-system pod:
root@frparcaptainpil:~/azure# kubectl -n kube-system describe pod kube-proxy-q5hch
Name: kube-proxy-q5hch
Namespace: kube-system
Node: aks-agentpool-34372919-2/172.16.35.194
Start Time: Wed, 22 Aug 2018 12:16:59 +0200
Labels: component=kube-proxy
controller-revision-hash=1770405012
pod-template-generation=1
tier=node
Annotations: <none>
Status: Running
IP: 172.16.35.194
Controlled By: DaemonSet/kube-proxy
Containers:
kube-proxy:
Container ID: docker://c6f836dc1f6ae22396aaa8e9bb76de12d368915ab5f2fa748128c3c3902adf57
Image: k8s.gcr.io/hyperkube-amd64:v1.10.6
Image ID: docker-pullable://k8s.gcr.io/hyperkube-amd64@sha256:0eb0eed93c81feb6b5694385537a249fd8e271123ba344e853c1eb8b60cd7c85
Port: <none>
Host Port: <none>
Command:
/hyperkube
proxy
--kubeconfig=/var/lib/kubelet/kubeconfig
--cluster-cidr=172.16.35.128/25
--feature-gates=ExperimentalCriticalPodAnnotation=true
State: Terminated
Reason: Error
Exit Code: 2
Started: Thu, 23 Aug 2018 14:20:56 +0200
Finished: Thu, 23 Aug 2018 14:23:41 +0200
Ready: False
Restart Count: 7
Requests:
cpu: 100m
Environment: <none>
Mounts:
/etc/kubernetes/certs from certificates (ro)
/var/lib/kubelet from kubeconfig (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dsz28 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet
HostPathType:
certificates:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/certs
HostPathType:
default-token-dsz28:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dsz28
Optional: false
QoS Class: Burstable
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: node-role.kubernetes.io/master=true:NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreatePodContainer 2m (x671 over 2h) kubelet, aks-agentpool-34372919-2 unable to ensure pod container exists: failed to create container for /kubepods/burstable/pod811f3110-a5f4-11e8-9151-5a18d5725d34 : mountpoint for devices not found
When I restart the nodes, the kube-system pods are temporarily Ready and then fail again afterwards. I'm just starting out with Kubernetes and I can't find a solution to this problem.
Note that we have security rules on our Virtual Network, but the nodes can access the Internet (they do pull their images).
Regards
Problem solved; the issue was elsewhere. A batch job running during the night messed with the VMs, and Kubernetes was no longer able to find its mountpoint (/var/run/secrets/kubernetes.io/serviceaccount)
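For anyone in the same situation, here is a minimal check I would run on a node before restarting it, to confirm whether the mount the kubelet complains about has disappeared. This is only a sketch under my assumptions (cgroup v1 layout, systemd-managed kubelet); the check is separated from the root-only fix so it is safe to run first:

```shell
# Returns success if the devices cgroup hierarchy is mounted where the
# kubelet expects it on a cgroup v1 host.
devices_cgroup_mounted() {
  grep -qE '^cgroup /sys/fs/cgroup/devices ' /proc/mounts
}

if devices_cgroup_mounted; then
  echo "devices cgroup OK"
else
  echo "devices cgroup missing; as root, a possible fix is:"
  echo "  mount -t cgroup -o devices cgroup /sys/fs/cgroup/devices"
  echo "  systemctl restart kubelet"
fi
```

If the mount really was removed by an external job (as in my case), fixing or disabling that job is the durable solution; remounting only repairs the node until the job runs again.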