Kubernetes liveness probe logging recovery

1/21/2020

I am trying to test a liveness probe while learning kubernetes. I have set up a minikube and configured a pod with a liveness probe.
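For reference, a minimal manifest for such a pod might look like this (the pod name, image, and probe command are illustrative, following the common exec-probe pattern from the Kubernetes docs, not my exact setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    # Create a marker file, then delete it after 30s so the probe
    # starts failing and a recovery/restart cycle can be observed.
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
```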

Testing the script (e.g. via docker exec), it seems to report success and failure as required.

The probe leads to failure events, which I can view via kubectl describe pod <podname>, but it does not report recovery from failures.

This answer says that liveness probe successes are not reported by default.

I have been trying to increase the log level, without success, by running variations like:

minikube start --extra-config=apiserver.v=4
minikube start --extra-config=kubectl.v=4
minikube start --v=4

As suggested here & here.

What is the proper way to configure the logging level for a kubelet?

Can it be modified without restarting the pod or minikube?

An event is reported if a failure causes the pod to be restarted. I understand that, for Kubernetes itself, knowing enough to decide whether to restart the pod is sufficient.

Why aren't events recorded for recovery from a failure which does not require a restart? This is how I would expect probes to work in a health monitoring system.

How would recovery be visible if the same probe were used in Prometheus or similar? For an expensive probe I would not want it to run multiple times. (Granted, one probe could cache its output to a file, making a second probe cheaper.)
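That caching idea can be sketched in shell (the file path and function name here are illustrative assumptions, not from my actual probe script):

```shell
#!/bin/sh
# Sketch: run an expensive health check once and cache its result to a
# file, so other readers (the liveness probe, a metrics exporter, ...)
# can inspect the cached result cheaply instead of re-running the check.
STATUS_FILE="${STATUS_FILE:-/tmp/health-status}"

expensive_check() {
    # Stand-in for a slow, costly health check.
    echo "ok"
}

# The "expensive" probe writes its result once.
expensive_check > "$STATUS_FILE"

# The "cheap" probe (or any other consumer) only reads the cached file.
grep -q ok "$STATUS_FILE" && echo "healthy"
```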

-- Bruce Adams
kubernetes

1 Answer

1/22/2020

I have been trying to increase the log level with no success by running variations like:

minikube start --extra-config=apiserver.v=4
minikube start --extra-config=kubectl.v=4
minikube start --v=4

@Bruce, none of the options you mentioned will work, as they relate to other components of the Kubernetes cluster. The answer you referred to clearly states:

The output of successful probes isn't recorded anywhere unless your Kubelet has a log level of at least --v=4, in which case it'll be in the Kubelet's logs.

So you need to set --v=4 specifically for the kubelet. In the official docs you can see that it can be started with various flags, including one that changes the default verbosity level of its logs:

-v, --v Level
number for the log level verbosity

The kubelet runs as a system service on each node, so you can check its status by issuing:

systemctl status kubelet.service

and if you want to see its logs, issue:

journalctl -xeu kubelet.service

Try:

minikube start --extra-config=kubelet.v=4

however, I'm not 100% sure Minikube is able to pass this parameter, so you'll need to verify it on your own. If it doesn't work, you should still be able to add the flag to the kubelet configuration, specifying the parameters with which it is started (don't forget to restart kubelet.service after applying the changes: simply run systemctl restart kubelet.service).
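On systemd-managed nodes this is often done with a drop-in file; a sketch (the file path and the KUBELET_EXTRA_ARGS variable are assumptions that depend on how your distribution/kubeadm wires up the unit, so check your own kubelet.service first):

```ini
# /etc/systemd/system/kubelet.service.d/10-verbosity.conf (illustrative path)
[Service]
Environment="KUBELET_EXTRA_ARGS=--v=4"
```

After creating the drop-in, run systemctl daemon-reload && systemctl restart kubelet.service for it to take effect.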

Let me know if it helps and don't hesitate to ask additional questions if something is not completely clear.

-- mario
Source: StackOverflow