Kubernetes pod status is CrashLoopBackOff but no logs are showing up

8/18/2021

I am a beginner learning Kubernetes. For testing, I tried pulling an unofficial ZooKeeper image from a private registry in my YAML file, but the pod status was ImagePullBackOff. I managed to fix that error and the image now pulls successfully, but the new pod status is CrashLoopBackOff.

When I run "kubectl logs -f -p zookeeper-n1-pod-0 -c zookeeper-n1 -n test-1", or "kubectl logs podname" in any other form in the PuTTY terminal, there is no output at all; the cursor just moves to the next line. I tried the "exit $?" command to see the exit status of my previous command and got 0, which suggests the last command ran successfully, yet the pod status is still CrashLoopBackOff. I am unable to troubleshoot this further because there are no logs. What is the probable cause and solution for this?

Thanks in advance!!

-- Dolly
containers
kubernetes

2 Answers

8/26/2021

In my Dockerfile I had missed renaming the zoo_sample.cfg configuration file to zoo.cfg, which caused the ZooKeeper server launch to fail and led to the ImagePullBackOff error, the CrashLoopBackOff error, and no logs showing up. This step is mandatory because zkServer.sh looks for zoo.cfg on startup.
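For reference, a minimal sketch of the relevant Dockerfile step. The install path /opt/zookeeper is an assumption from my setup; adjust it to wherever ZooKeeper is unpacked in your image.

# Assumption: ZooKeeper is unpacked under /opt/zookeeper in the image
# zkServer.sh only loads conf/zoo.cfg, so the sample config must be copied to that name
RUN cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
# Run the server in the foreground so the container does not exit immediately
CMD ["/opt/zookeeper/bin/zkServer.sh", "start-foreground"]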

-- Dolly
Source: StackOverflow

8/18/2021

CrashLoopBackOff means that the pod crashes right after it starts. Kubernetes tries to start the pod again, the pod crashes again, and this repeats in a loop.

kubectl logs [podname] -p

The -p option reads the logs of the previous (crashed) instance.
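For example, with the pod, container, and namespace names from your question, that would be:

kubectl logs -p zookeeper-n1-pod-0 -c zookeeper-n1 -n test-1

If even the previous instance shows no output, the container most likely exited before writing anything to stdout/stderr, and describing the pod (next step) is the way to find the termination reason.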

Next, you can check the "State" reason, the "Last State" reason, and the "Events" section by describing the pod.

kubectl describe pod <pod-name> -n <namespace>
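With the names from your question, a sketch of the same check (the jsonpath fields below are standard container status fields, not specific to your setup):

kubectl describe pod zookeeper-n1-pod-0 -n test-1

kubectl get pod zookeeper-n1-pod-0 -n test-1 -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'

The second command prints the last terminated state, including the exit code and reason, even when the container produced no logs.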

I would also recommend checking this blog post: Debugging CrashLoopBackOff.

-- Chandra Sekar
Source: StackOverflow