I have a JVM application running in Kubernetes. When I run kubectl top pod
I see the following:
mypod1 12m 6035Mi
mypod2 11m 6129Mi
mypod3 11m 6334Mi
I would like to find out whether that ~6 GiB of memory usage is good or bad. My Kubernetes deployment YAML does not specify any resources.
Questions
Question #1: "How can I find out the maximum number that it can get to?" A: Without resources
configured in the deployment, a pod will have QoS class of BestEffort and can use as much memory as it is available on the node where it is running. See also my answer to this question: How can I tell how much RAM my Kubernetes pod has?
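You can verify the QoS class assigned to a running pod directly (the namespace and pod name below are placeholders):
kubectl --namespace=<namespace> get pod <pod-name> -o jsonpath='{.status.qosClass}'
For a pod with no resources configured this prints BestEffort.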
It is always a good practice, IMHO, to at least specify the minimum (-Xms) and maximum (-Xmx) JVM heap size.
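For instance, a minimal sketch of how this could look in the deployment (the container name, image, and the actual sizes are placeholders to adapt to your workload):
containers:
  - name: myapp
    image: myapp:latest
    env:
      - name: JAVA_TOOL_OPTIONS
        value: "-Xms2g -Xmx4g"
    resources:
      requests:
        memory: "4Gi"
        cpu: "250m"
      limits:
        memory: "6Gi"
JAVA_TOOL_OPTIONS is picked up automatically by the JVM at startup, and keeping -Xmx comfortably below the container memory limit leaves headroom for non-heap memory (metaspace, thread stacks, direct buffers).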
Question #2: "How can I find out whether jvm is performing well?" A: You can start with enabling JMX and then using it to collect JVM and application metrics. Besides the JMX-to-HTTP bridges like Jolokia and Prometheus JMX Exporter, it is also an option to connect directly over JMX. One way is to:
Expose JMX by configuring these JVM startup arguments:
-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.rmi.port=4444 -Djava.rmi.server.hostname=127.0.0.1
Note that this pins the otherwise dynamic RMI port and sets the hostname for the RMI server.
Forward local ports to these ports on the pod:
kubectl --namespace=<namespace> port-forward <pod-name> 4444:4444 1099:1099
Locally, start a tool that can connect to the JVM in the pod over JMX (jconsole, jvisualvm, jmc... whichever is available to you). The JMX URL would be:
service:jmx:rmi://127.0.0.1:4444/jndi/rmi://127.0.0.1:1099/jmxrmi
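For example, assuming jconsole is available locally, you could connect straight from the command line:
jconsole service:jmx:rmi://127.0.0.1:4444/jndi/rmi://127.0.0.1:1099/jmxrmi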
Question #3: "Is there a profiler I can connect to the jvm running in the pods?" A: The short answer is "yes". I have used JProfiler to remotely profile Java apps running on k8s through port forwarding. (I am not affiliated with JProfiler nor am promoting it - it was simply the tool the team I was helping had a license for)
You can use the Vertical Pod Autoscaler (VPA) to have CPU/memory requests updated automatically based on the observed consumption of your container.
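A minimal sketch of a VPA object, assuming the VPA components are installed in your cluster and your deployment is named myapp (with updateMode "Off" it only publishes recommendations and does not restart pods):
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  updatePolicy:
    updateMode: "Off"
The recommendations then show up in the object's status (kubectl describe vpa myapp-vpa).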
To get JVM metrics, I would recommend installing a Prometheus exporter in your server and scraping the metrics. Then you can see how many objects are alive and get generational information, to better understand what drives the memory usage and how to control it.
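For example, with the Prometheus JMX Exporter the exporter runs as a Java agent attached at JVM startup; the jar path, port, and config file below are placeholders:
-javaagent:/opt/jmx_prometheus_javaagent.jar=9404:/opt/jmx-exporter-config.yaml
The metrics are then exposed over HTTP on port 9404 for Prometheus to scrape.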
You can connect a profiler by forwarding the relevant ports with kubectl port-forward, as described above.