I have a program which has multiple independent¹ components.
It is trivial to add a liveness probe to each of the components; however, it is not easy to have a single liveness probe that determines the health of all of the program's components.
How can I make Kubernetes look at multiple liveness probes and restart the container when any of them fails?
I know this can be achieved by adding more software, for example an additional bash script that performs the liveness checks, but I am looking for a native way to do it.
¹By independent I mean that failure of one component does not make the other components fail.
Kubernetes doesn't do that. The model is pretty simple: one liveness probe per container, and the restart policy is followed when it fails.
I understand the problem of legacy apps that weren't designed for containers, but there really are a lot of ways to arrange for resources to be shared for legacy compatibility. If the components of this system are already separate processes, then there should be a way to partition them into containers, as sketched below.
If the components are threads, or use some other intra-application modularization technique, then the liveness determination really has to come from inside the app.
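For example, if the components are already separate processes, each one can run in its own container of the same Pod and carry its own liveness probe. A rough sketch (the container names, images, ports, and health path below are made up for illustration):
apiVersion: v1
kind: Pod
metadata:
  name: my-program               # hypothetical Pod name
spec:
  containers:
  - name: component-a            # first independent component
    image: my-program:component-a
    livenessProbe:
      httpGet:
        path: /healthz           # assumed health endpoint
        port: 8080
      periodSeconds: 15
  - name: component-b            # second independent component
    image: my-program:component-b
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8081
      periodSeconds: 15
With this layout Kubernetes restarts only the container whose probe fails, which matches the fact that the components are independent of each other.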
The Kubernetes API allows one liveness and one readiness probe per container (in the Deployment / Pod spec). I recommend creating a centralized validation service that exposes a single REST endpoint:
livenessProbe:
  httpGet:
    path: /monitoring/alive
    port: 3401
    httpHeaders:
    - name: X-Custom-Header
      value: Awesome
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 15
or use a bash script for the same task, like:
livenessProbe:
  exec:
    command:
    - /bin/bash
    - -c
    - ./liveness.sh
  initialDelaySeconds: 220
  timeoutSeconds: 5
liveness.sh
#!/bin/bash
# Exit 0 (healthy) when at least one java process is running; exit 1 otherwise.
# The bracketed pattern [j]ava keeps grep from matching its own process.
if [ "$(ps -ef | grep '[j]ava' | wc -l)" -ge 1 ]; then
  exit 0
else
  echo "Nothing happens!" 1>&2
  exit 1
fi
When the probe fails, you can see the failure message in the Pod's events, for example: "Warning Unhealthy Pod Liveness probe failed: Nothing happens!"
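Since the question is about several independent components, the same exec-style probe can run a script that checks every component and fails if any one of them is unhealthy. A rough sketch, assuming each component exposes its own local health endpoint (the ports and path below are placeholders):
#!/bin/bash
# Fail the probe as soon as any component's health endpoint stops answering.
# Ports and path are placeholders; adjust them to your components.
for port in 3401 3402 3403; do
  if ! curl -fsS --max-time 2 "http://localhost:${port}/monitoring/alive" > /dev/null; then
    echo "Component on port ${port} is not healthy" 1>&2
    exit 1
  fi
done
exit 0
Kubernetes will then restart the whole container when any single component becomes unhealthy, which is as close to "multiple liveness probes" as you can get without splitting the components into separate containers.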
Hope this helps