JVM in Kubernetes/Docker running out of memory faster than standalone

12/7/2018

We are moving our JDK 8u131 JVM servers to a Kubernetes/Docker environment.
We have a few JVM servers running in standalone VMs and a few running in the Kubernetes/Docker environment, and both types are in production.
Under the same load, the Kubernetes/Docker JVMs are running out of memory, whereas the JVMs in the VMs run fine without issues.
We use the exact SAME JVM parameters in the VM and in the container.

Any ideas how to fix this issue?

Here are the options:

Environment:
      JAVA_MEM_OPTS: -Xms2048M -Xmx2048M 
                     -XX:MaxPermSize=256M -XX:+ExitOnOutOfMemoryError -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 
                     -XX:+HeapDumpOnOutOfMemoryError 
                     -XX:HeapDumpPath=/heapdumps/${HOSTNAME}_$(date +%Y%m%d_%H_%M_%S).hprof  

      JAVA_GC_OPTS:  -Dnogclogging=true -XX:+PrintGC -XX:+PrintGCDetails
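For reference, one quick way to compare what the JVM actually sees inside the pod with what it sees on the standalone VM (a sketch; the pod name is a placeholder, and the cgroup path assumes cgroup v1, which is typical for 2018-era clusters):

      # What heap/metaspace sizes did the JVM settle on inside the container?
      kubectl exec <pod-name> -- java -XX:+PrintFlagsFinal -version | grep -iE 'maxheapsize|maxmetaspacesize'

      # What memory limit does the cgroup actually impose on the container?
      kubectl exec <pod-name> -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes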

{Heap before GC invocations=2880 (full 625):
 PSYoungGen      total 435712K, used 249344K
  eden space 249344K, 100% used
  from space 186368K, 0% used
  to   space 228352K, 0% used
 ParOldGen       total 1398272K, used 1397679K
  object space 1398272K, 99% used
 Metaspace       used 229431K, capacity 249792K, committed 249968K, reserved 1271808K
  class space    used 24598K, capacity 27501K, committed 27544K, reserved 1048576K
2018-12-07T15:43:21.420+0000: 124733.208: [...] 1647023K->1646334K(1833984K), 1.2079201 secs] [Times: user=1.98 sys=0.01, real=1.21 secs]
Heap after GC invocations=2880 (full 625):
 PSYoungGen      total 435712K, used 248654K
  eden space 249344K, 99% used
  from space 186368K, 0% used
  to   space 228352K, 0% used
 ParOldGen       total 1398272K, used 1397679K
  object space 1398272K, 99% used
 Metaspace       used 229431K, capacity 249792K, committed 249968K, reserved 1271808K
  class space    used 24598K, capacity 27501K, committed 27544K, reserved 1048576K
}
{Heap before GC invocations=2881 (full 626):
 PSYoungGen      total 435712K, used 249344K
  eden space 249344K, 100% used
  from space 186368K, 0% used
  to   space 228352K, 0% used
 ParOldGen       total 1398272K, used 1397679K
  object space 1398272K, 99% used
 Metaspace       used 229431K, capacity 249792K, committed 249968K, reserved 1271808K
  class space    used 24598K, capacity 27501K, committed 27544K, reserved 1048576K
2018-12-07T15:43:22.632+0000: 124734.420:
SERVER RESTARTS HERE

-- Swamy
docker
garbage-collection
java-8
kubernetes

1 Answer

12/7/2018

Did you set your container memory resource requests and limits? JDK 8u131 doesn't know that it is running inside a container; it still sees the host VM's resources. That could be why the JVM inside your container is getting killed.

There's a good article from Red Hat back in 2017: https://developers.redhat.com/blog/2017/03/14/java-inside-docker/
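As a rough sketch of that advice (the sizes below are placeholders, not tuned recommendations): give the container an explicit memory limit with headroom above the 2 GB heap for Metaspace, thread stacks and other native allocations, and make sure the JVM's own ceilings fit inside that limit:

      # Pod spec fragment: make the container's memory budget explicit
      resources:
        requests:
          memory: "3Gi"
        limits:
          memory: "3Gi"

      # JVM side: cap Metaspace explicitly (-XX:MaxPermSize is ignored on Java 8,
      # so Metaspace currently has no upper bound beyond the reserved address space).
      # Alternatively, drop -Xms/-Xmx and use the experimental flags available from 8u131
      # so the heap is derived from the cgroup limit instead of the host's memory:
      #   -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
      JAVA_MEM_OPTS: -Xms2048M -Xmx2048M -XX:MaxMetaspaceSize=256M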

-- Bal Chua
Source: StackOverflow