I would like to deploy a sidecar container that measures the memory usage (and potentially also the CPU usage) of the main container in the pod and then sends this data to an endpoint.
I was looking at cAdvisor, but Google Kubernetes Engine has a hardcoded 10s measuring interval, and I need 1s granularity. Deploying another cAdvisor instance is an option, but I need those metrics only for a subset of pods, so it would be wasteful.
Is it possible to write a sidecar container that monitors the main container's metrics? If so, what tools could the sidecar use to gather the data?
This could be done by sharing the process namespace for the Pod. The sidecar container would then be able to see the processes of the main container (e.g. via ps) and could monitor their CPU / memory usage with standard Unix tools.
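For example, with shareProcessNamespace: true set in the Pod spec, the sidecar could sample the main process once per second with ps and push the numbers out. A minimal sketch, assuming a procps-style ps/pgrep in the sidecar image; the process name my-app and the collector URL are placeholders:

#!/bin/sh
# Works only if the Pod has shareProcessNamespace: true, so the sidecar
# can see the main container's processes. "my-app" and the URL are placeholders.
MAIN_PID=$(pgrep -o my-app)
while true; do
  # RSS (KiB) and CPU percentage of the main process, as reported by ps
  set -- $(ps -o rss=,%cpu= -p "$MAIN_PID")
  RSS_KB=$1
  CPU_PCT=$2
  curl -s -X POST -H 'Content-Type: application/json' \
    -d "{\"rss_kb\": $RSS_KB, \"cpu_pct\": $CPU_PCT, \"ts\": $(date +%s)}" \
    http://collector:8080/metrics
  sleep 1
done

Note that ps reports %cpu averaged over the process lifetime, not over the last second, so for true per-second CPU usage you would read deltas from /proc/<pid>/stat or from cgroups instead.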
One tool could be node-exporter with the processes collector enabled. This can then be monitored by Prometheus.
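The processes collector is disabled by default, so it has to be switched on explicitly on the sidecar's node_exporter command line. A rough sketch (the listen address is an assumption, and Prometheus's scrape_interval would have to be lowered to 1s to actually get that granularity):

# Enable the (disabled-by-default) processes collector; with a shared process
# namespace the sidecar's /proc also shows the main container's processes.
node_exporter \
  --collector.processes \
  --web.listen-address=:9100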
See topz, a simple utility that exposes the top command as a web interface.
That one-second granularity will probably be the main showstopper for many monitoring tools. In theory you can script it on your own: use the Docker stats API and read the stats stream for the main container only. You will need to mount /var/run/docker.sock into the sidecar container. Curl example:
curl -N --unix-socket /var/run/docker.sock http://localhost/containers/<container-id>/stats
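The stream emits one JSON document roughly every second, so a small loop in the sidecar can pick out the fields it needs and forward them. A sketch, assuming jq is available in the sidecar image and http://collector:8080/metrics is a placeholder endpoint:

CONTAINER_ID=<container-id>   # ID of the main container, e.g. taken from the Pod status
curl -sN --unix-socket /var/run/docker.sock \
  "http://localhost/containers/$CONTAINER_ID/stats" |
while read -r line; do
  # memory_stats.usage is in bytes; the CPU counters are cumulative nanoseconds
  echo "$line" | jq -c '{ts: .read,
                         mem_bytes: .memory_stats.usage,
                         cpu_total_ns: .cpu_stats.cpu_usage.total_usage}' |
    curl -s -X POST -H 'Content-Type: application/json' -d @- \
      http://collector:8080/metrics
done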
Another option is to read the metrics from cgroups, but that requires more calculation on your side. You will need to mount the cgroup filesystem into the sidecar container. See some examples of cgroup pseudo-files at https://docs.docker.com/config/containers/runmetrics/
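A sketch of what that could look like with cgroup v1 pseudo-files; the <pod-cgroup> part of the paths depends on the container runtime and is a placeholder, and under cgroup v2 the equivalent files are memory.current and cpu.stat:

# cgroup v1 paths as mounted in the sidecar; <pod-cgroup> is a placeholder
MEM_FILE=/sys/fs/cgroup/memory/<pod-cgroup>/memory.usage_in_bytes
CPU_FILE=/sys/fs/cgroup/cpuacct/<pod-cgroup>/cpuacct.usage
PREV=$(cat "$CPU_FILE")
while true; do
  sleep 1
  MEM=$(cat "$MEM_FILE")    # current memory usage in bytes
  CUR=$(cat "$CPU_FILE")    # cumulative CPU time in nanoseconds
  echo "mem_bytes=$MEM cpu_ns_last_1s=$((CUR - PREV))"   # or POST to your endpoint
  PREV=$CUR
done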
You can use Prometheus and Grafana for memory and CPU usage monitoring. These are open-source tools and can be used in production environments as well.