Why does Kubernetes report "readiness probe failed" along with "liveness probe failed"?

10/7/2019

I have a working Kubernetes deployment of my application.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: my-app
        image: my-image
        ...
        readinessProbe:
          httpGet:
            port: 3000
            path: /
        livenessProbe:
          httpGet:
            port: 3000
            path: /

When I apply the deployment, I can see that it runs correctly and the application responds to my requests.

$ kubectl describe pod -l app=my-app

...
Events:
  Type    Reason     Age   From                                  Message
  ----    ------     ----  ----                                  -------
  Normal  Scheduled  4m7s  default-scheduler                     Successfully assigned XXX
  Normal  Pulled     4m5s  kubelet, pool-standard-4gb-2cpu-b9vc  Container image "my-app" already present on machine
  Normal  Created    4m5s  kubelet, pool-standard-4gb-2cpu-b9vc  Created container my-app
  Normal  Started    4m5s  kubelet, pool-standard-4gb-2cpu-b9vc  Started container my-app

The application has a defect and crashes under certain circumstances. When I trigger such a condition, I see the following in the pod events:

$ kubectl describe pod -l app=my-app

...
Events:
  Type     Reason     Age               From                                  Message
  ----     ------     ----              ----                                  -------
  Normal   Scheduled  6m45s             default-scheduler                     Successfully assigned XXX
  Normal   Pulled     6m43s             kubelet, pool-standard-4gb-2cpu-b9vc  Container image "my-app" already present on machine
  Normal   Created    6m43s             kubelet, pool-standard-4gb-2cpu-b9vc  Created container my-app
  Normal   Started    6m43s             kubelet, pool-standard-4gb-2cpu-b9vc  Started container my-app
  Warning  Unhealthy  9s                kubelet, pool-standard-4gb-2cpu-b9vc  Readiness probe failed: Get http://10.244.2.14:3000/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  4s (x3 over 14s)  kubelet, pool-standard-4gb-2cpu-b9vc  Liveness probe failed: Get http://10.244.2.14:3000/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Normal   Killing    4s                kubelet, pool-standard-4gb-2cpu-b9vc  Container crawler failed liveness probe, will be restarted

It is expected that the liveness probe fails and the container is restarted. But why do I also see a "Readiness probe failed" event?

-- Maksim Sorokin
kubernetes
kubernetes-deployment
readinessprobe

4 Answers

10/7/2019

Provide an implementation at the backend: expose a /health URI and put your liveness logic behind it; readiness can use the same endpoint or one of its own.

The /health URI should be backed by a handler that returns a 200 status code when everything is fine, and a failure status otherwise.
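The probes in the deployment above could then point at that endpoint. A minimal sketch, assuming the handler is served on the same port (the /health path and the timing values are illustrative, not from the original manifest):

readinessProbe:
  httpGet:
    port: 3000
    path: /health        # hypothetical dedicated health endpoint
  timeoutSeconds: 2      # illustrative: fail the probe if no reply within 2s
livenessProbe:
  httpGet:
    port: 3000
    path: /health        # same endpoint, or a separate one if you prefer
  failureThreshold: 3    # illustrative: restart after 3 consecutive failures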

-- Tushar Mahajan
Source: StackOverflow

10/7/2019

You configured the same check for the readiness and the liveness probe; therefore, if the liveness check fails, the readiness check can be assumed to fail as well.
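If you want the two probes to be distinguishable (and to fail independently), a sketch with hypothetical separate endpoints:

readinessProbe:
  httpGet:
    port: 3000
    path: /ready    # hypothetical: "can I accept traffic right now?"
livenessProbe:
  httpGet:
    port: 3000
    path: /healthz  # hypothetical: "is the process alive at all?"

This way a pod can be temporarily not-ready (taken out of the Service endpoints) without being restarted.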

-- Thomas
Source: StackOverflow

10/7/2019

The readiness probe is used to determine whether the container is ready to serve requests. Your container can be running but still fail the probe; in that case the pod is removed from the Service endpoints and receives no traffic.

By default, the readiness probe runs every 10 seconds.
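Written out with the Kubernetes defaults made explicit, the readiness probe from the question is equivalent to:

readinessProbe:
  httpGet:
    port: 3000
    path: /
  initialDelaySeconds: 0  # default: start probing as soon as the container starts
  periodSeconds: 10       # default: probe every 10 seconds
  timeoutSeconds: 1       # default: the probe fails if no reply within 1 second
  successThreshold: 1     # default: one success marks the pod ready again
  failureThreshold: 3     # default: three consecutive failures mark it unready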

You can read more here: https://docs.openshift.com/container-platform/3.9/dev_guide/application_health.html

-- Paul
Source: StackOverflow

10/8/2019

As @suren wrote in the comment, the readiness probe is still executed after the container has started. Thus, if both liveness and readiness probes are defined (and, as in this case, they are identical), both the readiness and the liveness probe can fail.

Here is a similar question with a clear in-depth answer.

-- Maksim Sorokin
Source: StackOverflow