I'm trying to deploy an app that has a health check endpoint. If the check fails, the pod should be destroyed, but K8s keeps the pod in Running status.
Config:
readinessProbe:
  httpGet:
    path: /healthcheck
    port: 3001
  initialDelaySeconds: 5
  periodSeconds: 5
  successThreshold: 1
Pod:
NAME                        READY   STATUS    RESTARTS   AGE
docs-app-768b47bc69-lrlcf   0/1     Running   0          1m
So, is there a way to destroy the pod when the readiness probe fails?
Pair the readiness probe with a liveness probe to make it more effective, and specify resource limits. When the liveness probe fails failureThreshold consecutive times, the kubelet restarts the container:
resources:
  limits:
    cpu: 300m
    memory: 200Mi
  requests:
    cpu: 300m
    memory: 200Mi
readinessProbe:
  httpGet:
    path: /api/health
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 20
  successThreshold: 1
  failureThreshold: 3
livenessProbe:
  httpGet:
    path: /api/health
    port: 80
  initialDelaySeconds: 25
  periodSeconds: 25
  successThreshold: 1
  failureThreshold: 3
Readiness probes are for service readiness: if the probe is passing, the pod is behind your load balancer; if it isn't, the pod isn't. This is useful for cutting off traffic to an overloaded pod and letting it flush its backpressure.
Liveness probes are for restarting containers that are unhealthy with no hope of recovery.
The documentation is very clear: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/
This is not what probes are meant for. A livenessProbe that keeps failing will, after the configured failureThreshold number of attempts, "restart" the pod (more precisely, the kubelet restarts the container; the pod object itself stays). The readinessProbe instead indicates that the pod should not serve traffic while it is failing. Probes are not meant to create or destroy pods; if you need that, you have to write your own controller that monitors pod status and deletes the pod, or the replica/deployment, on failure.
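For illustration, here is a minimal sketch of such a controller using the official kubernetes Python client. It watches pods and deletes any Running pod whose Ready condition has gone False; the namespace and label selector are assumptions for the example, not something from the question.

# Minimal "pod reaper" sketch: delete Running pods whose Ready condition is False.
# NAMESPACE and LABEL_SELECTOR are hypothetical values for this example.
from kubernetes import client, config, watch

NAMESPACE = "default"
LABEL_SELECTOR = "app=docs-app"

def is_unready(pod):
    # A pod that is Running but reports Ready=False is failing its readiness probe.
    if pod.status.phase != "Running":
        return False
    for cond in pod.status.conditions or []:
        if cond.type == "Ready" and cond.status == "False":
            return True
    return False

def main():
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()
    # Stream pod events and delete pods that have gone unready.
    for event in watch.Watch().stream(v1.list_namespaced_pod,
                                      namespace=NAMESPACE,
                                      label_selector=LABEL_SELECTOR):
        pod = event["object"]
        if is_unready(pod):
            print(f"deleting unready pod {pod.metadata.name}")
            v1.delete_namespaced_pod(pod.metadata.name, NAMESPACE)

if __name__ == "__main__":
    main()

Note that if the pod is managed by a Deployment, the ReplicaSet will immediately create a replacement, so this recreates the pod rather than scaling it away. In practice you would also want a grace period before deleting, since a pod can be briefly unready during normal operation.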