Kubernetes: liveness probe is failing but pod is in Running state

1/4/2019

I'm trying to do a blue-green deployment with Kubernetes. I followed https://www.ianlewis.org/en/bluegreen-deployments-kubernetes and that works fine. I have added a liveness probe to execute a health check:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flask-1.3
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: app
        version: "1.3"
    spec:
      containers: 
        - name: appflask
          image: 192.168.99.100:5000/fapp:1.2
          livenessProbe:
            httpGet:
              path: /index2
              port: 5000
            failureThreshold: 1
            periodSeconds: 1
            initialDelaySeconds: 1
          ports:
            - name: http
              containerPort: 5000

The path "/index2" doesn't exist; I want to test a failed deployment. The problem is that when I execute:

 kubectl get pods -o wide

for some seconds one of the pods is in the "Running" state. At first both pods are in CrashLoopBackOff:

NAME                         READY   STATUS             RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
flask-1.3-6c644b8648-878qz   0/1     CrashLoopBackOff   6          6m19s   10.244.1.250   node    <none>           <none>
flask-1.3-6c644b8648-t6qhv   0/1     CrashLoopBackOff   7          6m19s   10.244.2.230   nod2e   <none>           <none>

After some seconds one pod is Running even though the liveness probe is always failing:

NAME                         READY   STATUS             RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
flask-1.3-6c644b8648-878qz   1/1     Running            7          6m20s   10.244.1.250   node    <none>           <none>
flask-1.3-6c644b8648-t6qhv   0/1     CrashLoopBackOff   7          6m20s   10.244.2.230   nod2e   <none>           <none>

And after being Running it goes back to CrashLoopBackOff. The question is: why does it stay Running for some seconds if the liveness probe always fails?

Thanks in advance.

-- Esteban Ziths
kubernetes

2 Answers

1/4/2019

What's happening to you is this:

When you first start the pod (or the container), it will start and get into the Running state. Now, if there is no process running in the container, or if there is a non-continuous process (say sleep 100), then when that process finishes, Kubernetes is going to consider the pod Completed.

Now, since you have a Deployment, which is going to keep a certain number of replicas running, it re-creates the pod. But again there is no process running, so again it gets into Completed. This is an infinite loop.

If you want to keep the pod up and running even though no process is running inside, you can pass the parameter tty: true in your YAML file.

apiVersion: v1
kind: Pod
metadata:
  name: debian
  labels:
    app: debian
spec:
  containers:
  - name: debian
    image: debian
    tty: true       # this line will keep the terminal open

If you run the pod above without tty: true, the same thing is going to happen.
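
An alternative sketch (my own variant, not part of the original answer): instead of tty: true you can give the container a long-running command so its main process never exits; the pod name and image below are only illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: debian-sleep
  labels:
    app: debian
spec:
  containers:
  - name: debian
    image: debian
    # sleep keeps the main process alive indefinitely, so the pod stays Running
    command: ["sleep", "infinity"]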

-- suren
Source: StackOverflow

1/4/2019

You should be looking at a readiness probe instead, or at both of them.

Readiness and liveness probes can be used in parallel for the same container. Using both can ensure that traffic does not reach a container that is not ready for it, and that containers are restarted when they fail.

Liveness probe checks if your application is in a healthy state in your already running pod.

The readiness probe checks whether your pod is actually ready to receive traffic. Thus, if there is no /index2 endpoint, the pod will never be reported as Ready and will not receive traffic.
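
As a minimal sketch (my own addition, reusing the Deployment from the question), the same container with a readiness probe added next to the liveness probe could look like this; the values are only illustrative:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flask-1.3
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: app
        version: "1.3"
    spec:
      containers:
        - name: appflask
          image: 192.168.99.100:5000/fapp:1.2
          livenessProbe:
            httpGet:
              path: /index2
              port: 5000
            failureThreshold: 1
            periodSeconds: 1
            initialDelaySeconds: 1
          readinessProbe:
            httpGet:
              path: /index2   # while this fails, the pod shows 0/1 READY and receives no traffic
              port: 5000
            failureThreshold: 1
            periodSeconds: 1
          ports:
            - name: http
              containerPort: 5000

The failing readiness probe keeps the pod out of the Service endpoints (0/1 READY); the liveness probe, as before, still triggers restarts. If you only want the pod marked unready instead of restarted, drop the liveness probe and keep just the readiness probe.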

-- edbighead
Source: StackOverflow