Kubernetes Pod reporting more memory usage than actual process consumption

10/24/2018

I have a Kubernetes Pod that has

  • Requested Memory of 1500Mb
  • Memory Limit of 2048Mb
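
For reference, this is roughly how those requests and limits are declared per container in the pod spec. All names and images below are placeholders and I'm assuming the Mi unit suffix; it's only a sketch, not the actual manifest:

# Write out a minimal pod spec with these requests/limits
# (pod/container names and images are placeholders):
cat > pod-sketch.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: java-app                    # hypothetical pod name
spec:
  containers:
  - name: app                       # the heavy Java application
    image: registry.example.com/java-app:latest     # placeholder image
    resources:
      requests:
        memory: "1500Mi"
      limits:
        memory: "2048Mi"
  - name: log-shipper               # lightweight log-shipping sidecar
    image: registry.example.com/log-shipper:latest   # placeholder image
EOF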

I have 2 containers running inside this pod: one is the actual application (a heavy Java app) and the other is a lightweight log shipper.

The pod consistently reports memory usage of 1.9-2 GB. Because of this, the deployment is scaled out (an autoscaling configuration is set which scales pods if memory consumption > 80%), naturally resulting in more pods and more costs.
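
The autoscaling rule looks roughly like the following sketch (deployment name and replica bounds are placeholders; I'm assuming the autoscaling/v2 HorizontalPodAutoscaler API with a memory utilization target):

# Sketch of an HPA that scales out when memory utilization exceeds 80%
# (deployment name and replica bounds are placeholders):
cat > hpa-sketch.yaml <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: java-app-hpa                # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: java-app                  # hypothetical deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80      # percent of the memory *request*
EOF

Note that resource utilization is measured against the request (1500Mi here), so the 80% threshold is only about 1.2 GB, and a reported usage of 1.9-2 GB keeps the autoscaler permanently above target.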

[Monitoring graph omitted: the yellow line represents the application's reported memory usage]

However, on deeper investigation, this is what I found.

After exec-ing into the application container, I ran the top command, which reports a total of 16431508 KiB, or roughly 16 GB, of memory available. That is the memory of the underlying machine, not the container: top reads /proc/meminfo, which is not namespaced per container.
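
For anyone reproducing this, the checks were along these lines (pod/container names are placeholders, and the cgroup path assumes cgroup v1):

# top inside the container reads the host's /proc/meminfo, so the
# "total" line reflects the ~16 GB node, not the 2048Mi limit:
kubectl exec <pod-name> -c app -- top -b -n 1 | head -n 5

# The container's actual limit lives in its memory cgroup (cgroup v1):
kubectl exec <pod-name> -c app -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes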

There are 3 processes running inside the application container, of which the root process (the application itself) takes 5.9% of memory, which comes out to roughly 0.92 GB of the ~16 GB that top reports.

The log shipper takes only about 6 MB of memory.
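
A quick way to double-check those per-process numbers from inside the container (a sketch; the exact ps options depend on the procps variant installed in the image):

# Per-process resident set size (RSS) in KiB, largest first:
ps -eo pid,comm,rss --sort=-rss | head

# Sanity check on the 5.9% figure: 5.9% of the 16431508 KiB total
# reported by top is about 0.92 GiB:
awk 'BEGIN { printf "%.2f GiB\n", 0.059 * 16431508 / 1024 / 1024 }'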

Now, what I don't understand is WHY my pod consistently reports such high usage metrics. Am I missing something? We're incurring significant costs due to the unintended auto-scaling and would like to fix this.

-- bholagabbar
kubernetes
linux
memory-management
process

1 Answer

10/24/2018

In Linux, unused memory is considered wasted memory. That's why all "free" RAM, i.e. memory not used by the application or the kernel itself, is actively used for caching I/O operations, file system metadata, etc., but it is handed back to your application whenever it is needed. Because this page cache is charged to the container's cgroup, the pod's reported memory usage ends up much higher than the sum of the processes' resident memory.
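
You can see this on any Linux box, for example:

# "free" separates truly used memory from reclaimable page cache;
# the "available" column estimates what applications can still get:
free -m

# Kernel counters behind those numbers:
grep -E '^(MemFree|MemAvailable|Cached):' /proc/meminfo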

You can get detailed information about your container's memory consumption here:

/sys/fs/cgroup/memory/docker/{id}/memory.stat
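
Leave {id} as the container ID on the node; from inside the container the same accounting is usually visible at /sys/fs/cgroup/memory/memory.stat. A sketch, assuming cgroup v1:

# On the node, with the real container id substituted for {id}:
grep -E '^(cache|rss|total_inactive_file) ' /sys/fs/cgroup/memory/docker/{id}/memory.stat

# Total memory charged to the cgroup; this includes the page cache
# and is what makes the pod metric look so high:
cat /sys/fs/cgroup/memory/docker/{id}/memory.usage_in_bytes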

If you want to autoscale based on memory, it is better to count only your application's actual footprint (its resident/working set), not the container's total memory usage, which includes reclaimable cache.
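
As a rough illustration of "application size": the working set that Kubernetes reports is approximately the cgroup's usage minus inactive file cache, which you can compare with the Java process's own RSS to decide what to scale on (a sketch, reusing the {id} placeholder and assuming cgroup v1):

# Approximate the working set the way cAdvisor does on cgroup v1:
cg=/sys/fs/cgroup/memory/docker/{id}
usage=$(cat "$cg/memory.usage_in_bytes")
inactive=$(awk '/^total_inactive_file /{print $2}' "$cg/memory.stat")
echo "working set: $(( (usage - inactive) / 1024 / 1024 )) MiB"

# Compare with the Java process's own resident memory:
ps -o rss= -C java | awk '{sum+=$1} END {printf "java RSS: %.0f MiB\n", sum/1024}'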

-- getslaf
Source: StackOverflow