Cannot find cause for new pods

8/25/2019

I have a kubernetes 1.11 cluster that has been running for weeks. Today I noticed this:

tooluser$ kubectl get po --sort-by=.status.startTime -o custom-columns=NAME:.metadata.name,CREATED:.status.startTime,RESTARTS:.status.containerStatuses[0].restartCount
NAME                           CREATED        RESTARTS
pod1-86b8b985f4-78x4c          <10 min ago>   0
pod2-788dbb86df-wj672          <10 min ago>   0
pod3-76d94f5d94-gspqg          <10 min ago>   0
pod4-demo-56cb4bfc68-m2b52     <10 min ago>   0
pod5-69cc97c4c-29dnk           <10 min ago>   0

i.e. it looks like the pods started for the first time (0 restarts) about 10 minutes ago, yet these pods have been running for weeks. Further:

  • No events on the pods.
  • I checked the ReplicaSets: no new ReplicaSet for a week, and no ReplicaSet events.
  • Same for the Deployments: no events on the corresponding Deployments.
  • Same for the nodes: no node events.
  • No general events (kubectl get events); these checks correspond roughly to the commands sketched below.
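
For reference, the checks above were roughly the following (the <...> placeholders stand in for my actual resource names, and no namespace flag is shown):

tooluser$ kubectl get rs                                   # any new ReplicaSets?
tooluser$ kubectl describe rs <rs-name>                    # ReplicaSet events
tooluser$ kubectl describe deployment <deployment-name>    # Deployment events
tooluser$ kubectl describe node <node-name>                # node events
tooluser$ kubectl get events                               # general events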

I.e., I cannot find the reason why these pods, which have been running for weeks, appear to have started completely fresh. Are there other kubectl commands I could run to find out?

-- Oliver
kubernetes
kubernetes-pod

2 Answers

8/25/2019

If you have implemented an HPA or resource limits, new pods can be scheduled. With resource limits, if a memory limit is breached, Kubernetes kills the pod, and if the pod is managed by a ReplicaSet a new one is created. You can check this with:

kubectl get events

Or by checking the logs of the kube-scheduler with:

kubectl logs <kube-scheduler-pod-name> -n kube-system
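
As a further sketch (the pod name below is taken from the question's output; add -n <namespace> if the pods are not in the default namespace), you can check whether an HPA exists and whether a container was OOM-killed and restarted:

kubectl get hpa --all-namespaces                   # is an HPA managing these workloads?
kubectl describe pod pod1-86b8b985f4-78x4c         # look at Last State / Reason (e.g. OOMKilled)
kubectl get pod pod1-86b8b985f4-78x4c -o jsonpath='{.status.containerStatuses[0].lastState}'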
-- Akın Özer
Source: StackOverflow

8/26/2019

The intended sequence is the following:

  1. Kubelet observes that a pod is bound to its host.
  2. Kubelet records the pod.Status.StartTime.
  3. Kubelet pulls the container image(s).
  4. Kubelet starts the pod.

StartTime is used mainly with ActiveDeadlineSeconds to determine how long a container/pod has been running, and kubelet is responsible for terminating the pod/container once the deadline has been reached. Given that image pulling may take a long time, the active deadline could be inaccurate.
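
As a rough check of that (the pod name is taken from the question; add -n with your namespace if needed), you can compare the recorded pod StartTime with the time the container actually started:

# pod-level StartTime vs. the first container's running.startedAt
$ kubectl get pod pod1-86b8b985f4-78x4c -o jsonpath='{.status.startTime}{"  "}{.status.containerStatuses[0].state.running.startedAt}{"\n"}'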

Check the logs of specific pods:

$ kubectl logs pod_name -n your_namespace -c container_name

Then you can check the container:

$ kubectl exec -it pod_name -- /bin/bash

At the same time, check the status of the kubelet.
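
A minimal sketch for that, assuming the kubelet runs under systemd (run these on the node itself):

$ systemctl status kubelet                      # is the kubelet running, and since when?
$ journalctl -u kubelet --since "1 hour ago"    # recent kubelet logs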

To get information from the logs, such as the reason for a start or a failure, I suggest you specify the terminationMessagePath field of a container in your pod's definition file. The default value is /dev/termination-log.

Kubernetes uses the contents of the specified file to populate the container's status message on both success and failure. Here is the documentation: pod-termination-message.
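
A minimal pod definition sketch showing the field (the names and the /tmp/my-log path are only placeholders, not anything from your cluster):

apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  containers:
  - name: termination-demo-container
    image: debian
    command: ["/bin/sh"]
    args: ["-c", "sleep 10 && echo Sleep expired > /tmp/my-log"]
    # whatever is written to this file is surfaced in the container's status message
    terminationMessagePath: /tmp/my-log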

You can find more information about the pod lifecycle here: pod-lifecycle.

-- MaggieO
Source: StackOverflow