I have a Kubernetes Pod with two containers running inside it: the actual application (a heavy Java app) and a lightweight log shipper.
The pod consistently reports memory usage of 1.9-2 GiB. Because of this, the deployment is scaled out (an autoscaling configuration is in place that scales up pods when memory consumption exceeds 80%), naturally resulting in more pods and more cost.
(Monitoring chart: the yellow line represents the application's memory usage.)
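For context, the autoscaler is roughly of this shape; this is a simplified sketch rather than the real manifest, and the HPA/deployment names are placeholders:

```sh
# Hypothetical memory-based HPA (autoscaling/v2) targeting 80% utilization.
cat <<'EOF' | kubectl apply -f -
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa              # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                # placeholder deployment name
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization       # percentage of the container's memory request
        averageUtilization: 80
EOF
```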
However, on deeper investigation, this is what I found.
On `exec`ing into the application container, I ran the `top` command, which reports a total of 16431508 KiB, or roughly 16 GiB, of memory available, which is the memory available on the machine.
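For reference, `top` inside a container reads /proc/meminfo, which reflects the node's memory rather than the container's cgroup. The container's own view can be checked roughly like this (a sketch assuming cgroup v1 paths; `<pod>` and `<container>` are placeholders):

```sh
kubectl exec <pod> -c <container> -- sh -c '
  cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # memory limit applied to the container
  cat /sys/fs/cgroup/memory/memory.usage_in_bytes   # current usage, page cache included
'
```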
There are 3 processes running inside the application container, of which the root process (the application itself) takes 5.9% of memory, which comes out to roughly 0.92 GiB. The log shipper takes just 6 MiB of memory.
Now, what I don't understand is WHY my pod consistently reports such high usage. Am I missing something? We're incurring significant costs due to the unintended auto-scaling and would like to fix this.
In Linux, unused memory is considered wasted memory, which is why all "free" RAM, i.e. memory not used by the application or the kernel itself, is actively used for caching I/O operations, filesystem metadata, etc. That cache is reclaimed and handed back to your application whenever it is required.
You can get detailed information about your container's memory consumption here:
/sys/fs/cgroup/memory/docker/{id}/memory.stat
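For example, to see how much of that usage is the application's own RSS versus reclaimable page cache, something like this should work (a sketch assuming cgroup v1, run on the node; keep {id} as your container ID, as in the path above):

```sh
CGROUP=/sys/fs/cgroup/memory/docker/{id}      # {id} = your container ID
grep -E '^(cache|rss|total_inactive_file) ' "$CGROUP/memory.stat"
cat "$CGROUP/memory.usage_in_bytes"           # total charged memory, cache included
# Kubernetes' memory metric is the "working set", which is roughly
# usage_in_bytes - total_inactive_file, i.e. usage minus reclaimable cache.
```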
If you want to scale your cluster based on memory usage, it is better to count only your application's own memory (its RSS / working set), not the total container memory usage, which includes the reclaimable page cache.