How to determine the reason for a CrashLoopBackOff error in a Spring Boot application deployed on Kubernetes

8/11/2017

I have a Spring Boot application deployed in a Docker container on Kubernetes. The application works well for some time (hours), but at a certain moment it starts restarting like crazy, showing a CrashLoopBackOff error state.

This is the info I get from the dead pod:

Port:       8080/TCP
State:      Waiting
  Reason:       CrashLoopBackOff
Last State:     Terminated
  Reason:       Error
  Exit Code:    137
  Started:      Fri, 11 Aug 2017 10:15:03 +0200
  Finished:     Fri, 11 Aug 2017 10:16:22 +0200
Ready:      False
Restart Count:  7
...
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bhk8f (ro)
    Environment Variables:
      JAVA_OPTS:        -Xms512m -Xmx1792m
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
...
QoS Class:  BestEffort
Tolerations:    <none>
No events.

Is there any way to get more detailed information about the cause of the crashes?

Is error code 137 an out-of-memory error? I have kept increasing the memory of the Java process, from -Xmx768m up to 1792m, but the errors keep showing up. Could it be something else?
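For reference, exit code 137 is 128 + 9, i.e. the process was killed with SIGKILL, which in a container usually points to the kernel OOM killer rather than a JVM-level OutOfMemoryError. Something like the following should show what Kubernetes recorded about the last termination (podName is a placeholder; the [0] index assumes a single container per pod):

# Reason and exit code recorded for the last terminated container
kubectl get pod podName -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
kubectl get pod podName -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'

# Recent cluster events sometimes mention OOMKilled or node memory pressure
kubectl get events --sort-by=.lastTimestamp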

One weird fact I still need to figure out: the application runs well at first, then after some hours the pod is killed, and from then on every restart gets killed after only a few seconds of execution.

-- codependent
docker
kubernetes
spring-boot

1 Answer

8/11/2017

kubectl logs podName -c containerName will provide you with the container logs, which should give you additional information about the cause of the error.
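Since the container is crash-looping, the current instance may die before it logs anything useful, so the logs of the previous (crashed) instance and the pod description are usually the interesting ones. A minimal sketch, with podName and containerName as placeholders:

# Logs of the previous, already-crashed instance of the container
kubectl logs podName -c containerName --previous

# Shows the last state, exit code and restart count quoted in the question
kubectl describe pod podName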

-- Chris Stryczynski
Source: StackOverflow