Assume the deployment of a Java microservice (Dropwizard) via Docker and Kubernetes.
An example microservice starts and runs flawlessly with a 192Mi heap size. This is taken as its basic memory requirement.
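One way to validate such a baseline locally is to pin the heap to the candidate value and watch GC behavior under load; a minimal sketch, assuming the Gradle-generated start script (which honors JAVA_OPTS) and JDK 11's unified GC logging:

# Pin the heap to 192Mi and log GC activity; if the service handles a
# representative load without excessive GC or OutOfMemoryError, the
# baseline holds.
JAVA_OPTS="-Xmx192m -Xlog:gc" bin/svc-example server config.yml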
An example Dockerfile, based on openjdk-runtime:11-hotspot:
FROM some-container-reg/openjdk-runtime:11-hotspot
# copy the built distribution and configuration
COPY --chown=1001:1001 build/install/svc-example .
COPY --chown=1001:1001 config.yml .
# expose the application port
EXPOSE 8080
# set the entry point
ENTRYPOINT ["bin/svc-example", "server", "config.yml"]
Note: Since OpenJDK 10, the JVM detects whether it is running in a container, so there is no need to enable experimental VM options to achieve this, as was required back in Java 8 (-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap). For details, please see the link below.
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8146115
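To double-check that the JVM actually picked up the container limits, the final flag values can be inspected from inside the running container; a small sketch, assuming a shell is available in the image:

# UseContainerSupport defaults to true on OpenJDK 10+; MaxHeapSize shows the
# heap the JVM derived from the container memory limit.
java -XX:+PrintFlagsFinal -version | grep -E "UseContainerSupport|MaxHeapSize"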
The corresponding k8s Deployment (excerpt):
resources:
  limits:
    cpu: 250m
    memory: 288Mi
  requests:
    cpu: 100m
    memory: 192Mi
env:
  - name: JAVA_OPTS
    value: "-XX:MaxRAMPercentage=75.0"
The container request equals the working amount of the Java application itself (the 192Mi heap). The limit is defined as the request plus an offset of 96Mi. This offset is meant to ensure that the memory consumers besides the JVM heap (Metaspace, CodeCache, thread stacks, ...) have enough resources to work properly.
JAVA_OPTS is also defined within the container deployment, applying -XX:MaxRAMPercentage=75.0, which caps the JVM heap at 75% of the total memory available to the container. Why not define an explicit heap limit via -Xmx192m? For details, please see the link below.
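As a sanity check of that percentage, the effective heap limit can be inspected locally with Docker; a sketch, assuming a public JDK 11 image (eclipse-temurin:11-jre here, standing in for the private base image):

# Limit the container to 288Mi, as the k8s limit does, and print the
# resulting MaxHeapSize: 75% of 288Mi is roughly 216Mi, leaving ~72Mi of
# headroom for Metaspace, CodeCache, thread stacks, etc.
docker run --rm -m 288m eclipse-temurin:11-jre \
  java -XX:MaxRAMPercentage=75.0 -XX:+PrintFlagsFinal -version | grep MaxHeapSize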
So, now to my questions:
- Should additional JVM flags such as GC settings or -XX:MaxMetaspaceSize be defined to prevent memory usage above the limits?
- Which JDK/JRE is actually used inside the Kubernetes pod?

You need to profile your app using profilers and monitor it using APM tools to fine-tune memory requirements as and when needed; this is a continuous activity. Regarding the last question: whatever JRE/JDK the Docker image contains is the JDK/JRE used to create the container for the pod in Kubernetes. Also, I noticed you are using a JDK, which is not recommended for production deployment; rather, use a JRE.
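One concrete profiling aid (my own suggestion, not something prescribed above) is the JVM's Native Memory Tracking, which breaks down exactly the non-heap consumers mentioned earlier; a sketch, assuming NMT is enabled at startup and a JDK (not a JRE) is present so that jcmd is available:

# Start the service with NMT enabled (adds a small runtime overhead).
JAVA_OPTS="-XX:NativeMemoryTracking=summary" bin/svc-example server config.yml
# From another shell in the same container: running `jcmd` alone lists JVM
# PIDs; pass the printed PID to get a per-area breakdown (heap, Metaspace,
# CodeCache, threads, ...).
jcmd <pid> VM.native_memory summary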
The thing to remember from a Kubernetes quota perspective is that if the memory consumption of the Java app ever goes beyond the value defined in the limits, the pod will be terminated (OOMKilled).
And the pod will not be scheduled at all if no node has as much free memory as defined in the requests.
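Both failure modes are visible via kubectl; a sketch, with svc-example-xyz as a hypothetical pod name:

# An over-limit kill appears under "Last State: Terminated" as
# "Reason: OOMKilled".
kubectl describe pod svc-example-xyz
# An unschedulable pod stays Pending and emits FailedScheduling events.
kubectl get events --field-selector reason=FailedScheduling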