I have a dockerized Java application running in a Kubernetes cluster. Until now I had configured a CPU limit of 1.5 cores. Now I have increased the available CPUs to 3 to make my app perform better.
Unfortunately, it now needs significantly more memory and gets OOMKilled by Kubernetes. This graph shows a direct comparison of the container's overall memory consumption with 1.5 cores (green) and 3 cores (yellow); nothing was changed except the CPU limit:
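Just to clarify what I mean by "available CPUs": as far as I understand, OpenJDK 10 is container-aware, so the JVM should see the Kubernetes CPU limit. A quick check like this (hypothetical snippet, not code from my application) is what I have in mind when I talk about the JVM seeing 1.5 vs. 3 cores:

    // Print what the JVM inside the container actually reports.
    public class RuntimeInfo {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("availableProcessors = " + rt.availableProcessors());
            System.out.println("maxMemory (heap)    = " + rt.maxMemory() / (1024 * 1024) + " MiB");
        }
    }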
The Java heap always looks fine and does not seem to be the problem. The additional memory consumption is in native memory.
My application is implemented with Spring Boot 1.5.15.RELEASE, Hibernate 5.2.17.Final, Flyway and Tomcat. I compile with Java 8 and run it in a Docker container based on OpenJDK 10.
I have spent the last few days debugging with JProfiler and jemalloc, as described in this post about native memory leak detection. jemalloc pointed me to a large amount of allocations coming from java.util.zip.Inflater.
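To illustrate the jemalloc finding: as I understand it, java.util.zip.Inflater holds native (zlib) memory that is only released by end(), which is called from close() on streams like GZIPInputStream. The following is a generic sketch of the pattern I mean, not code from my application:

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.zip.GZIPInputStream;

    public class InflaterSketch {

        // Closed properly: the Inflater's native buffers are freed on close().
        static int readAllClosed(byte[] gzipBytes) throws IOException {
            try (InputStream in = new GZIPInputStream(new ByteArrayInputStream(gzipBytes))) {
                int total = 0;
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    total += n;
                }
                return total;
            }
        }

        // Never closed: the underlying Inflater keeps its native memory until it is
        // eventually finalized, which can take a while if the heap is not under pressure.
        static int readAllLeaky(byte[] gzipBytes) throws IOException {
            InputStream in = new GZIPInputStream(new ByteArrayInputStream(gzipBytes));
            int total = 0;
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;
            }
            return total; // in.close() is missing, so Inflater.end() is never called explicitly
        }
    }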
Does anyone have a clue what could explain this (to me very irrational) coupling of available CPUs to native memory consumption? Any hints would be appreciated :-)!
Thanks & Regards
Matthias