Can a process inside a container use more memory than the container itself?
I have a pod with a single container that, based on Stackdriver graphs, uses 1.6G of memory at its peak. At the same time, I saw an error on the container, and while looking for the root cause I found an oom-killer message on the VM itself indicating that one of the processes inside the container was killed because it was using 2.2G (RSS).
How can that be?
Memory cgroup out of memory: Killed process 2076205 (chrome) total-vm:4718012kB, anon-rss:2190464kB, file-rss:102640kB, shmem-rss:0kB, UID:1001 pgtables:5196kB oom_score_adj:932
Thanks!
There are two pieces to this. First, what you see in the metrics is probably the working set size, which does not include buffers (file cache), while I think the oom-killer shows RSS, which does; note that the 2.2G you quote matches anon-rss plus file-rss in the log (2190464 kB + 102640 kB ≈ 2.19 GiB). But more importantly, the data in the metrics output is sampled, usually every 30 seconds. So if memory usage spiked suddenly between samples, or the process just tried to allocate one huge buffer, it could be killed before the spike ever showed up on the graph.
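
To see the difference concretely, here is a minimal sketch of the comparison, assuming a cgroup-v1 node and run inside the container (on cgroup v2 the paths are /sys/fs/cgroup/memory.current and memory.stat, so adjust accordingly). It computes the working set roughly the way monitoring agents such as cAdvisor do (cgroup usage minus inactive file cache) and prints it next to a process's RSS from /proc, which is the number the oom-killer reports; the pid 1 used here is just a placeholder for whichever process you care about.

    #!/usr/bin/env python3
    # Sketch: compare the "working set" that memory graphs typically show
    # (cgroup usage minus inactive file cache) with a process's RSS,
    # which is what the oom-killer reports.
    # Assumes cgroup-v1 paths; cgroup v2 uses different file names.

    CGROUP = "/sys/fs/cgroup/memory"   # cgroup-v1 memory controller mount


    def read_int(path):
        with open(path) as f:
            return int(f.read().strip())


    def cgroup_stat(key):
        # memory.stat is a list of "key value" lines
        with open(f"{CGROUP}/memory.stat") as f:
            for line in f:
                name, value = line.split()
                if name == key:
                    return int(value)
        return 0


    def proc_rss_kib(pid):
        # VmRSS line from /proc/<pid>/status, reported in kB
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
        return 0


    usage = read_int(f"{CGROUP}/memory.usage_in_bytes")
    inactive_file = cgroup_stat("total_inactive_file")
    working_set = usage - inactive_file   # what the graph likely plots

    print(f"cgroup usage        : {usage / 2**20:8.1f} MiB")
    print(f"inactive file cache : {inactive_file / 2**20:8.1f} MiB")
    print(f"working set         : {working_set / 2**20:8.1f} MiB")
    print(f"RSS of pid 1        : {proc_rss_kib(1) / 1024:8.1f} MiB")

If the working set here is well below the RSS, the gap is mostly page cache and the graph is not lying, just measuring something different; if the two are close, the graph most likely missed a short spike between samples.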