Kubernetes livenessProbe: restarting vs destroying the pod

9/13/2018

Is there a way to tell Kubernetes to destroy a pod and create a new one if the liveness probe fails? What I see in the logs now: my Node.js application is simply restarted and keeps running in the same pod.

The liveness probe is defined in my YAML specification as follows:

livenessProbe:
  httpGet:
    path: /app/check/status
    port: 3000
    httpHeaders:
    - name: Accept
      value: application/x-www-form-urlencoded
  initialDelaySeconds: 60
  periodSeconds: 60

Disclaimer:

I am fully aware that recreating a pod when a liveness probe fails is probably not the best idea, and that the right way would be to get a notification that something is going on.

-- Nick
kubectl
kubernetes

1 Answer

9/13/2018

Liveness and readiness probes are defined on containers, not pods. So if you have one container in your pod and you set restartPolicy to Never, a failed liveness probe kills the container instead of restarting it, your pod goes into a Failed state, and it will be scrapped at some point based on the terminated-pod-gc-threshold value.
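
As a minimal sketch of that approach (the pod name and image are placeholders, not from the question), a single-container pod whose failed liveness probe ends the whole pod could look like this:

apiVersion: v1
kind: Pod
metadata:
  name: my-app                 # placeholder name
spec:
  restartPolicy: Never         # container is killed, not restarted in place
  containers:
  - name: app
    image: my-node-app:1.0     # placeholder image
    ports:
    - containerPort: 3000
    livenessProbe:
      httpGet:
        path: /app/check/status
        port: 3000
      initialDelaySeconds: 60
      periodSeconds: 60

Note that nothing recreates a bare pod like this automatically, and a Deployment only accepts restartPolicy: Always, so you would need your own controller or automation to spin up a replacement.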

If you have more than one container in your pod it becomes trickier, because your other container(s) may still be running, keeping the pod in a Running status. You can build your own automation, or try Pod Readiness (readiness gates), which is still in alpha as of this writing; a sketch follows below.
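
As a rough sketch of a readiness gate, assuming the answer refers to the readiness-gates feature (the conditionType and image are made-up examples; an external controller is expected to set the matching condition on the pod's status):

apiVersion: v1
kind: Pod
metadata:
  name: my-app                             # placeholder name
spec:
  readinessGates:
  - conditionType: "example.com/healthy"   # hypothetical custom condition type
  containers:
  - name: app
    image: my-node-app:1.0                 # placeholder image

With this in place, the pod is only reported Ready once all containers are ready and the external controller has set the example.com/healthy condition to True on the pod. This affects readiness and traffic routing rather than destroying the pod, which is why custom automation is still needed for the recreate-on-failure behavior the question asks about.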

-- Rico
Source: StackOverflow