I'm running a Spring Boot application inside a pod with the configuration below.
Pod limits:

    resources:
      limits:
        cpu: "1"
        memory: 2500Mi
      requests:
        cpu: "1"
        memory: 2500Mi
Command args:

    spec:
      containers:
      - args:
        - -c
        - ln -sf /dev/stdout /var/log/access.log;java -Dcom.sun.management.jmxremote
          -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.authenticate=false
          -Dcom.sun.management.jmxremote.ssl=false -Djava.security.egd=file:/dev/./urandom
          -Xms1600m -Xmx1600m -XX:NewSize=420m -XX............
The -Xmx flag only controls the Java heap, which is the space available for your own Java objects. When the heap fills up, the JVM runs garbage collection to reclaim space; if it still cannot free enough, an OutOfMemoryError is thrown.
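As a quick sanity check, you can print the heap ceiling the running JVM actually picked up and compare it against your -Xmx value (a minimal sketch; the class name HeapCheck is illustrative):

    public class HeapCheck {
        public static void main(String[] args) {
            // maxMemory() reports the heap ceiling the JVM will try to use,
            // which corresponds roughly to the -Xmx setting (1600m here).
            long maxHeapMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
            System.out.println("Max heap: " + maxHeapMiB + " MiB");
        }
    }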
The JVM also uses memory beyond the heap for internal purposes: Metaspace for loaded classes, the JIT code cache, thread stacks, GC data structures, and so on. You therefore need to allow more memory in the Kubernetes limit than the -Xmx value alone. If the container exceeds the Kubernetes limit, the kernel's OOM killer will terminate the Java process.
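To see the split at runtime, the standard java.lang.management API reports heap and non-heap usage separately (a minimal sketch; the class name MemorySplit is illustrative):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class MemorySplit {
        public static void main(String[] args) {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = memory.getHeapMemoryUsage();
            MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();
            // Non-heap covers Metaspace, the JIT code cache, etc.: memory that
            // -Xmx does not cap but that still counts against the pod's limit.
            System.out.println("Heap max:      " + (heap.getMax() >> 20) + " MiB");
            System.out.println("Heap used:     " + (heap.getUsed() >> 20) + " MiB");
            System.out.println("Non-heap used: " + (nonHeap.getUsed() >> 20) + " MiB");
        }
    }

With your settings, -Xmx1600m against a 2500Mi limit leaves roughly 900MiB of headroom for that non-heap usage, which is usually comfortable.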
The config you posted looks fine. I normally arrive at these values by watching the pod's memory usage graph in Kubernetes after the application has run for some time without limits.
If running a JVM in Docker, rather than setting the -Xmx and -Xms options manually, it's better to use -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap to tell the JVM to respect the container's memory limit.
See the "Make JVM respect CPU and RAM limits" section of https://hub.docker.com/_/openjdk/ for more information.
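A minimal sketch of what that looks like, assuming the image's entrypoint invokes java directly (app.jar is a placeholder):

    java -XX:+UnlockExperimentalVMOptions \
         -XX:+UseCGroupMemoryLimitForHeap \
         -jar app.jar

Note that these flags apply to Java 8u131+ and 9; on JDK 10 and later the JVM detects container limits by default (-XX:+UseContainerSupport), and -XX:MaxRAMPercentage gives finer control over how much of the limit the heap may use.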
If the Java process has reached its max heap limit and GC cannot reclaim enough memory, the JVM throws an OutOfMemoryError. In this case the pod keeps running, but the Java process inside it is in an error state.
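A common mitigation, assuming HotSpot (the flag has been available since 8u92; app.jar is a placeholder), is -XX:+ExitOnOutOfMemoryError, which makes the JVM exit on the first OutOfMemoryError so Kubernetes can restart the container instead of leaving it in that half-dead state:

    java -XX:+ExitOnOutOfMemoryError -Xms1600m -Xmx1600m -jar app.jar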
-Xmx caps the JVM heap inside the pod; the heap cannot grow beyond it. In your case the heap could likely be increased, since the pod's 2500Mi memory limit is comfortably above the 1600m heap.