Memory discrepancies between JVM and k8s pod stats

9/7/2021

I’m trying to understand how Java manages memory in a Kubernetes environment. I see a ~200 MB mismatch between what k8s reports (via OS stats) and what the JVM reports (via JConsole). This 200 MB difference was measured straight after a manual GC run; without the manual GC it can grow to about 350 MB over time. So my question is: where do these 200 MB go? That sounds like too much for the container OS, IMHO.

Some data:

JVM flags

-Djava.net.preferIPv4Stack=true
-Djava.rmi.server.hostname=localhost
-XX:+UseG1GC
-XX:+UseContainerSupport
-XX:InitialRAMPercentage=50.0
-XX:MaxRAMPercentage=50.0
-XX:MinRAMPercentage=50.0 

Memory stats (just after manual GC):

daemon@POD_NAME:/sys/fs/cgroup/memory$ cat /sys/fs/cgroup/memory/memory.usage_in_bytes
385085440 B = 367 MB

JMX:

78 MB committed
23 MB heap
136 MB non-heap
Sum: 23 + 136 = 159 MB
Difference from the system report: 367 − 159 = 208 MB
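
The JMX figures above can also be read in-process via the standard `MemoryMXBean`; keep in mind that heap + non-heap only covers memory the JVM itself accounts for, not thread stacks, direct `ByteBuffer`s, or native `malloc` allocations. A minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class JvmMemoryReport {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        long mb = 1024 * 1024;
        // "used" is live data; "committed" is what the OS has actually
        // reserved for the JVM, so it is the figure to compare against
        // the cgroup number.
        System.out.println("heap used MB:          " + heap.getUsed() / mb);
        System.out.println("non-heap used MB:      " + nonHeap.getUsed() / mb);
        System.out.println("total committed MB:    "
                + (heap.getCommitted() + nonHeap.getCommitted()) / mb);
    }
}
```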

Running processes on the k8s pod (there is no `ps` there, so I used `find`):

find /proc -mindepth 2 -maxdepth 2 -name exe -exec ls -lh {} \; 2>/dev/null 
lrwxrwxrwx 1 daemon daemon 0 Sep  7 12:20 /proc/1/exe -> /usr/local/openjdk-8/bin/java
lrwxrwxrwx 1 daemon daemon 0 Sep  7 19:40 /proc/5007/exe -> /bin/bash
lrwxrwxrwx 1 daemon daemon 0 Sep  7 19:53 /proc/5114/exe -> /usr/bin/find

Heap memory: 24386736 B = 23 MB

Non-heap memory: 136543336 B = 130 MB

VM Summary

-- Matzz
java
jvm
kubernetes
memory-management

0 Answers