I'm trying to get a breakdown of the memory usage of my pods running on Kubernetes. I can see a pod's total memory usage through kubectl top pod, but what I need is a breakdown of where that memory is actually used.
My container might download or write new files to disk, so I'd like to see, at a given moment, how much of the used memory is taken by each file and how much by the running software. Currently there's no real disk, only tmpfs, which means every file consumes the allocated memory resources. That's fine, as long as I can inspect where the memory is going.
I couldn't find anything like that. It seems that cAdvisor helps to get memory statistics, but it relies on Docker/cgroups, which doesn't give the breakdown I described.
A better solution would be to install the metrics server along with Prometheus and Grafana in your cluster. Prometheus will scrape the metrics, which Grafana can then display as graphs.
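As a rough sketch of that setup, assuming Helm is available and using the community chart names (the release name "monitoring" is arbitrary — verify the repo and chart against the current docs):

```shell
# Install the metrics server (this is what backs "kubectl top")
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Add the community Prometheus chart repo and install the bundled
# Prometheus + Grafana stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack
```

These commands need a live cluster with Helm configured, so treat them as a starting point rather than a copy-paste recipe.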
If you want the per-process consumption inside the container, you can exec into the container and monitor the processes:
$ docker exec -it <container-name> watch ps aux
(or the equivalent on Kubernetes: kubectl exec -it <pod-name> -- watch ps aux)
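To turn that listing into a single number, a minimal sketch (run inside the container) sums the resident set size of every visible process. Note that on tmpfs, file pages are charged to the pod's cgroup but do not show up in any process's RSS, so the gap between this total and the cgroup's reported usage is a rough estimate of the file-backed share:

```shell
# Sum the resident set size (RSS, in kB) of all visible processes.
# tmpfs file pages are NOT included here -- subtracting this total from
# the cgroup's memory usage roughly isolates the file-backed portion.
ps -eo rss= | awk '{ total += $1 } END { printf "total RSS: %d kB\n", total }'
```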
Moreover, you can check docker stats for a container-level summary.
The following Linux command will summarize the sizes of the directories:
$ du -h
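Since every file on the tmpfs counts against the memory limit, a per-file view sorted by size is more useful than per-directory totals. A sketch, using /tmp/demo as a stand-in for your tmpfs mount point (substitute your own path):

```shell
# /tmp/demo stands in for the tmpfs mount -- substitute your own path.
mkdir -p /tmp/demo
head -c 1048576 /dev/zero > /tmp/demo/one-mb.bin   # 1 MiB sample file

# List every file with its size, largest first (GNU sort -h handles
# the human-readable units that du -h emits)
du -ah /tmp/demo | sort -rh | head -n 20
```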