I tried to do it with the following code snippet, but it does not work:
@ResponseBody
@GetMapping(FAIL)
public Response triggerError() {
    i = i + 1;
    if (i == 3) {
        i = 0;
        return Response.serverError().entity("Triggered 500").build();
    }
    return Response.ok().entity("I am fine").build();
}
How can I trigger an unhealthy status for a kubernetes pod?
According to the documentation, if a Pod is unhealthy, the container in the Pod will be restarted (or not), according to the restart policy.
By default, a Pod is considered unhealthy if one of the containers in the Pod exits with a non-zero status. If the Pod is constantly restarting, its status is shown as CrashLoopBackOff.
If a container in a Pod exits with 0, it gets the status Completed and no further restarts happen.
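As a minimal sketch of that behavior (the Pod name and the use of the busybox image here are just placeholders), a container whose command exits non-zero is restarted under the default restartPolicy: Always and the Pod eventually shows CrashLoopBackOff:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: crash-demo
spec:
  restartPolicy: Always   # the default; restart the container on any exit
  containers:
  - name: fail
    image: busybox
    # exits with a non-zero status immediately, so the kubelet keeps
    # restarting it with an increasing back-off delay
    command: ["/bin/sh", "-c", "exit 1"]
```

Changing "exit 1" to "exit 0" would instead leave the container in the Completed state.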
You can customize a Pod's health check using the liveness probe syntax:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
Explanation:
For the first 30 seconds of the Container’s life, there is a /tmp/healthy file. So during the first 30 seconds, the command cat /tmp/healthy returns a success code. After 30 seconds, cat /tmp/healthy returns a failure code.
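For your endpoint, an httpGet liveness probe is a more natural fit than an exec probe: the kubelet treats any HTTP status code greater than or equal to 200 and less than 400 as success, so your 500 response counts as a probe failure, and after failureThreshold consecutive failures (3 by default) the container is restarted. A sketch, assuming your application listens on port 8080 and that FAIL resolves to the path /fail (both are assumptions, since your snippet does not show them):

```yaml
livenessProbe:
  httpGet:
    path: /fail   # assumed value of FAIL
    port: 8080    # assumed application port
  initialDelaySeconds: 5
  periodSeconds: 5
```

Also note that Response.serverError() comes from the JAX-RS API; in a plain Spring MVC controller, Spring may serialize the Response object as an ordinary response body instead of honoring its status code, which could be why your snippet does not produce a 500. Returning a ResponseEntity with HttpStatus.INTERNAL_SERVER_ERROR is the usual Spring way to do this.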
I hope this helps.