StatefulSet recreates pod, why?

1/9/2020

In my deployment I have defined a Postgres StatefulSet, but without a PVC, so if the pod dies all data is gone. When I list all pods I see the following:

pod1 - Running - 10 min
pod2 - Running - 10 min
postgresPod - Running - 10 min

After some time I list the pods again and see the following:

pod1 - Running - 10 min
pod2 - Running - 10 min
postgresPod - Running - 5 min

As you can see, postgresPod has only been running for 5 minutes. I described the StatefulSet and see the following events:

Type     Reason               Age               From                    Message
----     ------               ----              ----                    -------
Normal   SuccessfulCreate     5m (x2 over 10m)  statefulset-controller  create Pod postgresPod in StatefulSet x-postgres successful
Warning  RecreatingFailedPod  5m                statefulset-controller  StatefulSet xx/x-postgres is recreating failed Pod postgresPod
Normal   SuccessfulDelete     5m                statefulset-controller  delete Pod postgresPod in StatefulSet x-postgres successful

So my question is: how can I find out why the StatefulSet recreates the pods? Is there any additional log? Could it be related to the resources of the machines, or was the pod recreated on another node that had more resources at that moment?

Any ideas?

-- liotur
deployment
devops
kubernetes
kubernetes-statefulset

1 Answer

1/9/2020

You should look into two things:

  1. Debug Pods

Check the current state of the pod and recent events with the following command:

kubectl describe pods ${POD_NAME}

Look at the state of the containers in the pod. Are they all Running? Have there been recent restarts?

Continue debugging depending on the state of the pods.

Especially take a closer look at why the Pod crashed.

More info can be found in the links I have provided.
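For example, a quick debugging sketch (the pod name and namespace are taken from the events above; adjust them to your cluster):

```shell
# Current state, container statuses, and recent events for the pod
kubectl describe pod postgresPod -n xx

# Logs from the previous, crashed container instance (often shows the crash reason)
kubectl logs postgresPod -n xx --previous

# Why the last container instance terminated, e.g. OOMKilled or Error
kubectl get pod postgresPod -n xx \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

# Events that involve this pod, e.g. failed probes, evictions, node problems
kubectl get events -n xx --field-selector involvedObject.name=postgresPod
```

If the terminated reason is OOMKilled, the pod exceeded its memory limit, which would match your suspicion about machine resources.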

  2. Debug StatefulSets

StatefulSets provide a debug mechanism to pause all controller operations on Pods using an annotation. Setting the pod.alpha.kubernetes.io/initialized annotation to "false" on any StatefulSet Pod will pause all operations of the StatefulSet. When paused, the StatefulSet will not perform any scaling operations. Once the debug hook is set, you can execute commands within the containers of StatefulSet pods without interference from scaling operations. You can set the annotation to "false" by executing the following:

kubectl annotate pods <pod-name> pod.alpha.kubernetes.io/initialized="false" --overwrite

When the annotation is set to "false", the StatefulSet will not respond to its Pods becoming unhealthy or unavailable. It will not create replacement Pods until the annotation is removed or set to "true" on each StatefulSet Pod.
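Separately, since you mention the StatefulSet runs without a PVC, the data will never survive a recreated pod. If persistence is wanted, a volumeClaimTemplate is the usual fix; a minimal sketch (the names, image tag, storage size, and labels below are assumptions, not taken from your manifest):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: x-postgres
spec:
  serviceName: x-postgres        # headless service assumed to exist
  replicas: 1
  selector:
    matchLabels:
      app: x-postgres
  template:
    metadata:
      labels:
        app: x-postgres
    spec:
      containers:
      - name: postgres
        image: postgres:12       # example image tag
        volumeMounts:
        - name: pgdata
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one PVC per pod; survives pod recreation
  - metadata:
      name: pgdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi           # example size
```

With this in place, the StatefulSet controller creates a PVC per pod and reattaches it whenever the pod is recreated, so the database files persist across restarts.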

Please let me know if that helped.

-- OhHiMark
Source: StackOverflow