Getting CrashLoopBackOff when deploying a pod

10/24/2021

I am new to Kubernetes and am trying to deploy a pod using an image from a private registry. Whenever I deploy this YAML, the pod goes into a crash loop. I added a sleep with a large value thinking that might help, but it still doesn't work.

apiVersion: v1
kind: Pod
metadata:
  name: privetae-image-testing
spec:
  containers:
    - name: private-image-test
      image: buildforjenkin.azurecr.io/nginx:latest
      imagePullPolicy: IfNotPresent
      command: ['echo','success','sleep 1000000']

Here is the output of kubectl describe pod:

Name:         privetae-image-testing
Namespace:    default
Priority:     0
Node:         docker-desktop/192.168.65.4
Start Time:   Sun, 24 Oct 2021 15:52:25 +0530
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.1.1.49
IPs:
  IP:  10.1.1.49
Containers:
  private-image-test:
    Container ID:  docker://46520936762f17b70d1ec92a121269e90aef2549390a14184e6c838e1e6bafec
    Image:         buildforjenkin.azurecr.io/nginx:latest
    Image ID:      docker-pullable://buildforjenkin.azurecr.io/nginx@sha256:7250923ba3543110040462388756ef099331822c6172a050b12c7a38361ea46f
    Port:          <none>
    Host Port:     <none>
    Command:
      echo
      success
      sleep 1000000
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 24 Oct 2021 15:52:42 +0530
      Finished:     Sun, 24 Oct 2021 15:52:42 +0530
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ld6zz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-ld6zz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  34s                default-scheduler  Successfully assigned default/privetae-image-testing to docker-desktop
  Normal   Pulled     17s (x3 over 33s)  kubelet            Container image "buildforjenkin.azurecr.io/nginx:latest" already present on machine
  Normal   Created    17s (x3 over 33s)  kubelet            Created container private-image-test
  Normal   Started    17s (x3 over 33s)  kubelet            Started container private-image-test
  Warning  BackOff    2s (x5 over 31s)   kubelet            Back-off restarting failed container

I am running the cluster on docker-desktop on windows. TIA

-- ChaitanyaSai
kubernetes
kubernetes-pod

2 Answers

10/24/2021

Notice you are using the standard nginx image? Try deleting your pod and re-applying with:

apiVersion: v1
kind: Pod
metadata:
  name: private-image-testing
  labels:
    run: my-nginx
spec:
  restartPolicy: Always
  containers:
  - name: private-image-test
    image: buildforjenkin.azurecr.io/nginx:latest
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
      name: http

If your pod runs, you should be able to remote into it with kubectl exec -it private-image-testing -- sh, and then wget -O- localhost should print a welcome message. If it still fails, add the output of kubectl logs -f -l run=my-nginx to your question.
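Put together as a small shell session (pod.yaml is an assumed filename for the manifest above; the old pod name comes from the question):

kubectl delete pod privetae-image-testing      # remove the crash-looping pod
kubectl apply -f pod.yaml                      # re-create it from the manifest above
kubectl exec -it private-image-testing -- sh   # open a shell in the new pod
wget -O- localhost                             # inside the container: expect the nginx welcome page
kubectl logs -f -l run=my-nginx                # back on the host: stream logs if it still fails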

-- gohm'c
Source: StackOverflow

11/2/2021

Check my previous answer to understand, step by step, what is going on after you launch the container.

You are launching an nginx:latest container whose main process normally runs forever, as it should, so that the container does not exit. Then you override its command with one that (to quote David) will print the words success and sleep 1000000, and, having printed those words, exit.

Instead of keeping your container running so it can serve, you are shooting yourself in the foot: the process finishes immediately. The sleep 1000000 is never executed at all; it is merely the second argument that echo prints.
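A quick local shell check (outside Kubernetes, just to illustrate) shows that echo prints the string and exits successfully; sleep never runs:

$ echo success 'sleep 1000000'
success sleep 1000000
$ echo $?
0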

And sure enough, your command is executed and the container exits. Check the state below: it terminated correctly with exit code 0, has already done so twice (Restart Count: 2), and will keep doing so. Since the pod's restartPolicy defaults to Always, the kubelet restarts the completed container each time and backs off between attempts, which is exactly what CrashLoopBackOff means.

  Reason:       CrashLoopBackOff
  Last State:   Terminated
  Reason:       Completed
  Exit Code:    0
  Started:      Sun, 24 Oct 2021 15:52:42 +0530
  Finished:     Sun, 24 Oct 2021 15:52:42 +0530

So think carefully about whether you really need command: ['echo','success','sleep 1000000']
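If the intent really was to print success and then keep the container alive, one possible sketch is to run the command through a shell so the sleep actually executes (note this still replaces nginx's normal entrypoint, so nothing will serve on port 80):

apiVersion: v1
kind: Pod
metadata:
  name: private-image-testing
spec:
  containers:
    - name: private-image-test
      image: buildforjenkin.azurecr.io/nginx:latest
      imagePullPolicy: IfNotPresent
      # 'sh -c' makes a shell parse the string, so 'sleep' runs after 'echo'
      # instead of being passed to echo as a literal argument
      command: ['sh', '-c', 'echo success && sleep 1000000']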

-- Vit
Source: StackOverflow