CrashLoopBackOff status after executing the Go program using pod.yaml: why?

8/23/2021

I have applied constraints in minikube. I have built a Go program into an image, which is run as a pod by applying the pod.yaml file below. When I check the status of the pod with `kubectl get pods`, after a few seconds it shows `CrashLoopBackOff` as the status, followed by the warning "Back-off restarting failed container". Why does the pod not stay running successfully instead of showing the CrashLoopBackOff error and restart warnings?

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: opa
  labels:
    name: opa
    namespace: test
    owner: name.agilebank.demo
spec:
  containers:
    - name: opa
      image: user-name/image-name
      resources:
        limits:
          memory: "1Gi"
          cpu: "200m"
      ports:
        - containerPort: 8000

     
    `kubectl get pods`
     NAME   READY   STATUS             RESTARTS   AGE
     opa    0/1     CrashLoopBackOff   12         41m


     `kubectl describe pod pod-name`
      Name:         opa
      Namespace:    default
      Priority:     0
      Node:         minikube/ip
      Start Time:   Mon, 23 Aug 2021 19:31:52 +0530
      Labels:       name=opa
                    namespace=test
                    owner=name.agilebank.demo
      Annotations:  <none>
      Status:       Running
      IP:           ip-no
      IPs:
        IP:  ip-no
      Containers:
        opa:
          Container ID:   docker://no
          Image:          username/img-name
          Image ID:       docker-pullable://username/img-name
          Port:           8000/TCP
          Host Port:      0/TCP
          State:          Waiting
            Reason:       CrashLoopBackOff
          Last State:     Terminated
            Reason:       Completed
            Exit Code:    0
            Started:      Mon, 23 Aug 2021 20:13:02 +0530
            Finished:     Mon, 23 Aug 2021 20:13:05 +0530
          Ready:          False
          Restart Count:  12
          Limits:
            cpu:     200m
            memory:  1Gi
          Requests:
            cpu:        200m
            memory:     1Gi
          Environment:  <none>
          Mounts:
            /var/run/secrets/kubernetes.io/serviceaccount from default-token-5zjvn (ro)
      Conditions:
        Type              Status
        Initialized       True
        Ready             False
        ContainersReady   False
        PodScheduled      True
      Volumes:
        default-token-5zjvn:
          Type:        Secret (a volume populated by a Secret)
          SecretName:  default-token-5zjvn
          Optional:    false
      QoS Class:       Guaranteed
      Node-Selectors:  <none>
      Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                       node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
      Events:
        Type     Reason     Age                  From               Message
        ----     ------     ----                 ----               -------
        Normal   Scheduled  45m                  default-scheduler  Successfully assigned default/opa to minikube
        Normal   Pulling    45m                  kubelet            Pulling image "username/img-name"
        Normal   Pulled     41m                  kubelet            Successfully pulled image "username/img-name"
        Normal   Created    39m (x5 over 41m)    kubelet            Created container opa
        Normal   Started    39m (x5 over 41m)    kubelet            Started container opa
        Normal   Pulled     30m (x7 over 41m)    kubelet            Container image "username/img-name" already present on machine
        Warning  BackOff    19s (x185 over 41m)  kubelet            Back-off restarting failed container
-- thara
crashloopbackoff
go
kubernetes
kubernetes-pod

1 Answer

8/23/2021

There is something wrong with your application: your app exits with `Exit Code: 0`.

It is probably executing whatever you told it to execute and then finishing its work. If you want to keep the container alive, your application has to keep running inside that container.
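
For illustration, here is a minimal sketch of a Go program that stays alive, assuming the image is meant to serve HTTP on port 8000 as the `containerPort` in your pod.yaml suggests (the handler and response are hypothetical):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // A long-running server keeps the container's main process alive,
        // so the kubelet does not see the container exit and restart it
        // in a back-off loop.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        })
        // Listen on the port declared as containerPort in pod.yaml.
        log.Fatal(http.ListenAndServe(":8000", nil))
    }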

This is not a probe error. With a probe error you would expect an event similar to this:

  Warning  Unhealthy  13s (x4 over 43s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 404
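
Such an event comes from a probe defined on the container, which your pod.yaml does not have. As a hypothetical sketch, a liveness probe that could produce that event might look like this (the `/health` path is made up):

    livenessProbe:
      httpGet:
        path: /health   # hypothetical endpoint
        port: 8000
      initialDelaySeconds: 5
      periodSeconds: 10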

Is your application in the container meant to run all the time? If you only want to execute it once and let it finish, you should not use a Pod. You should use a Job.
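
As a sketch, a Job wrapping the same image could look like the following (the Job name and `backoffLimit` are illustrative; note that a Job's `restartPolicy` must be `Never` or `OnFailure`):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: opa-job            # hypothetical name
    spec:
      backoffLimit: 3          # illustrative retry budget
      template:
        spec:
          restartPolicy: Never # Jobs may not use Always
          containers:
            - name: opa
              image: user-name/image-name
              resources:
                limits:
                  memory: "1Gi"
                  cpu: "200m"

With a Job, the container exiting with code 0 is treated as successful completion rather than as a crash to back off from.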

-- Daniel Hornik
Source: StackOverflow