I have the following liveness probe in my service deployment.yaml
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /health
    port: 9081
    scheme: HTTP
  initialDelaySeconds: 180
  timeoutSeconds: 10
  periodSeconds: 10
  successThreshold: 1
I want to test that the probe actually triggers a pod restart. What is the easiest way to make it fail, ideally in a programmatic way?
To clarify the question: I don't want to change the application code, nor pause the running container. I was wondering whether it is possible to somehow block the endpoint/port at runtime, perhaps using a kubernetes or docker command.
If you can get to the host where the pod is running, running docker pause on the container will pause all of the processes in it, which should make the liveness probe fail.
Note: I have not tried this myself, but based on the documentation of docker pause here, it sounds like it should work.
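A rough sketch of that approach, assuming a Docker-based node you can SSH into (the pod name my-service-pod is a placeholder):

```shell
# Find out which node the pod is scheduled on
kubectl get pod my-service-pod -o wide

# On that node, find the container ID belonging to the pod
docker ps --filter "name=my-service-pod"

# Pause every process in the container; the HTTP probe then times out
docker pause <container-id>

# Watch the RESTARTS counter increase once failureThreshold is exceeded
kubectl get pod my-service-pod -w

# Undo, if the kubelet has not already restarted the container
docker unpause <container-id>
```

Note that this only works on clusters whose container runtime is Docker; on containerd-based nodes the equivalent would go through crictl or ctr instead.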
If you have the ability to change the underlying application's code, simply change the /health
endpoint to return an HTTP status code of 400 or higher (kubernetes treats anything outside the 200-399 range as a failure).
If not, you'll have to make the application fail somehow, probably by logging into the pod using kubectl exec
and making changes that affect its health.
This is entirely dependent on your application; kubernetes will simply do what you tell it to.
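One way to break the probe from inside the pod without touching the application code is to block the probe port with iptables. This is a sketch, assuming the image ships iptables, the container runs as root, and the pod has the NET_ADMIN capability (pod name is a placeholder):

```shell
# Drop incoming TCP traffic to the probe port so GET /health stops responding
kubectl exec my-service-pod -- iptables -A INPUT -p tcp --dport 9081 -j DROP

# The kubelet's probes now time out; after failureThreshold * periodSeconds
# (3 * 10s with the config in the question) the container is restarted.
# The restart also discards the iptables rule, so no cleanup is needed.
kubectl get pod my-service-pod -w
```

This matches the "block the endpoint/port at runtime" idea from the question, though it does require the container to have the capability and tooling mentioned above.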
You could define your liveness probe as follows:
livenessProbe:
  exec:
    command:
      - /bin/bash
      - '-c'
      - /liveness-probe.sh
  initialDelaySeconds: 10
  periodSeconds: 60
And create a shell script in your root path named
liveness-probe.sh
that contains:
#!/bin/bash
#exit 0 #Does not fail and does not trigger a pod restart
exit 1 #Triggers pod restart
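With that exec probe in place, you can trigger a failure on demand from outside the pod, without redeploying. A sketch, assuming you ship the script with the exit 0 branch active (the pod name is a placeholder):

```shell
# Flip the script to the failing branch; the next probe run then returns
# a non-zero exit code and, after failureThreshold is reached, the kubelet
# restarts the container
kubectl exec my-service-pod -- sed -i 's/^exit 0/exit 1/' /liveness-probe.sh

# Confirm the restart happened by watching the RESTARTS column
kubectl get pod my-service-pod -w
```

Because the restart replaces the container filesystem with a fresh copy of the image, the script reverts to exit 0 on its own, so the pod comes back healthy after the test.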