In which scenarios can Kubernetes Pods stop working?

12/10/2018

An API written in Java Spring Boot was deployed to K8s with two Pods, and it ran successfully for three weeks. But yesterday it stopped working and produced a 503 Service Unavailable error.

The K8s admin told us that the Pods were recreated multiple times throughout the day. Although it started working again after I restarted my build from Drone, I want to know:

  1. Which scenarios can make Pods stop working?
  2. Why is K8s recreating the Pods again and again?
  3. If it is a memory issue, since I developed this API in Java, doesn't Java's garbage collection work here?

Regards, Hearaman.

-- Hearaman
kubernetes
spring-boot

1 Answer

12/10/2018

Which scenarios can make Pods stop working?

  1. Memory limits, resource requests, and quotas in general
  2. Your Pod has a Burstable QoS class, meaning it can be evicted to let other Pods live (see the Pod spec sketch right after this list)
  3. Nodes/workers are down or drained for updates/maintenance
  4. Your Java heap is exceeding the container memory limit, so the container gets OOM-killed (generally that's the case)
  5. Liveness probe issues (see the probe sketch below)
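
For points 1, 2, and 4, here is a minimal sketch of a Pod with explicit requests and limits (the names, image, and values are illustrative, not from the question). Because requests and limits differ here, the Pod gets the Burstable QoS class and is evicted before Guaranteed Pods under node memory pressure; if the container's memory usage crosses the limit, it is OOM-killed:

    apiVersion: v1
    kind: Pod
    metadata:
      name: spring-boot-api                  # hypothetical name
    spec:
      containers:
      - name: api
        image: example/spring-boot-api:1.0   # hypothetical image
        resources:
          requests:          # what the scheduler reserves for the Pod
            memory: "512Mi"
            cpu: "250m"
          limits:            # hard caps; exceeding memory => OOMKilled
            memory: "1Gi"
            cpu: "500m"

Setting requests equal to limits would instead give the Pod the Guaranteed QoS class, which is the last to be evicted.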

Why K8S is recreating Pods again and again?

  • To make the app available again: you might have a readiness probe issue, or some volume issues (it depends). A failing liveness probe will also make K8s restart the container over and over; see the sketch below.
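
Here is a minimal sketch of readiness and liveness probes, to be placed under the container entry of the Pod spec above (the /actuator/health path assumes Spring Boot Actuator is on the classpath, and 8080 is the default Spring Boot port). A failing readiness probe removes the Pod from the Service endpoints, which is one way clients end up seeing 503; a failing liveness probe makes the kubelet restart the container, which shows up as repeated Pod restarts:

    readinessProbe:              # gates Service traffic to the Pod
      httpGet:
        path: /actuator/health   # assumes Spring Boot Actuator
        port: 8080
      initialDelaySeconds: 30    # give the JVM time to start
      periodSeconds: 10
    livenessProbe:               # kubelet restarts the container on failure
      httpGet:
        path: /actuator/health
        port: 8080
      initialDelaySeconds: 60
      periodSeconds: 15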

If it is a memory issue, since I developed this API in Java, doesn't Java's garbage collection work here?

  • If you are using Java 8 (8u131 or later), you might want to let the JVM size its heap from the container's memory limit by starting the app with these flags (app.jar below is a placeholder for your Spring Boot jar):

    java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap \
         -XX:+PrintFlagsFinal -jar app.jar

UseCGroupMemoryLimitForHeap is an experimental flag introduced in Java 8u131 and removed in Java 11, where container awareness is on by default and -XX:MaxRAMPercentage controls the heap fraction instead, so check the Java documentation for your version.
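
If you don't control the image entrypoint, here is a sketch of passing the same flags through the container spec instead, using JAVA_TOOL_OPTIONS, an environment variable the HotSpot JVM picks up automatically at startup (the limit value is illustrative):

    env:
    - name: JAVA_TOOL_OPTIONS    # read automatically by the JVM
      value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
    resources:
      limits:
        memory: "1Gi"            # the flag sizes the max heap from this cgroup limit

With this in place, the JVM derives its maximum heap from the container's cgroup memory limit rather than from the node's total RAM, which is what typically leads to the OOM kills described above.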

Hope this helps

-- hkhelil
Source: StackOverflow