K8S events: restarting container, pods: zzzz?

2/3/2018

k is an alias for kubectl. I'm seeing these events:

$ k get events -w
...snip
2018-02-03 13:46:06 +0100 CET   2018-02-03 13:46:06 +0100 CET   1         consul-0.150fd18470775752   Pod       spec.containers{consul}   Normal    Started   kubelet, gke-projectid-default-pool-2de02f1c-059w   Started container
2018-02-03 13:46:06 +0100 CET   2018-02-03 13:46:06 +0100 CET   1         consul-0.150fd184668e88a6   Pod       spec.containers{consul}   Normal    Created   kubelet, gke-projectid-default-pool-2de02f1c-059w   Created container
2018-02-03 13:47:35 +0100 CET   2018-02-03 13:47:35 +0100 CET   1         consul-0.150fd1993877443c   Pod                 Warning   FailedMount   kubelet, gke-projectid-default-pool-2de02f1c-059w   Unable to mount volumes for pod "consul-0_staging(1f35ac42-08e0-11e8-850a-42010af001f0)": timeout expired waiting for volumes to attach/mount for pod "staging"/"consul-0". list of unattached/unmounted volumes=[data config tls default-token-93wx3]

At the same time, the pods look perfectly healthy:

$ k get pods
consul-0                        1/1       Running   0          49m
consul-1                        1/1       Running   0          1h
consul-2                        1/1       Running   0          1h
...snip

What is going on? Why are the events telling me it's restarting/starting the container? k logs pods/consul-0 (and -1 and -2) shows nothing about the containers being restarted.

-- Henrik
google-kubernetes-engine
kubernetes

1 Answer

2/3/2018

The third column of the events output is COUNT: the number of times that event has been seen. In your case, that value is 1, so Kubernetes is not restarting your container. The events are just a record that at some point in the past it created and started the container, once. That is also why the pod shows as Running with 0 restarts when you kubectl get pods.
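If you want to double-check, the pod status keeps a per-container restart counter. A quick sketch, using the consul-0 pod from the question (restartCount is a standard field of the pod status, not anything specific to this cluster):

# Print the restart counter for the first container in the pod
$ kubectl get pod consul-0 -o jsonpath='{.status.containerStatuses[0].restartCount}'
0

This should print 0 here, matching the RESTARTS column of kubectl get pods. kubectl describe pod consul-0 shows the same number as "Restart Count", together with the recent events for just that pod, which is easier to read than the full kubectl get events stream.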

-- Jose Armesto
Source: StackOverflow