Can someone explain why the active_file and inactive_file values are much greater than cache in my Docker container's cgroup memory.stat file?

11/23/2019

Can anyone explain how it's possible that active_file + inactive_file (~228 MB) is so much greater than cache (~537 KB)?

My understanding is that cache should include active_file and inactive_file, so how can the cache value be so low?
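For concreteness, here is a minimal Python sketch of the comparison I'm describing (it assumes a cgroup v1 memory controller mounted at the usual /sys/fs/cgroup/memory path, as in the output below):

#!/usr/bin/env python3
# Minimal sketch: read a cgroup v1 memory.stat and compare the cache
# counter against active_file + inactive_file.
# Assumes the v1 memory controller is mounted at the usual path.

STAT_PATH = "/sys/fs/cgroup/memory/memory.stat"

def parse_memory_stat(path):
    # Each line is "<key> <value>"; the cache and *_file counters are byte counts.
    stats = {}
    with open(path) as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value)
    return stats

def mib(n_bytes):
    return n_bytes / (1024 * 1024)

if __name__ == "__main__":
    stats = parse_memory_stat(STAT_PATH)
    file_pages = stats["active_file"] + stats["inactive_file"]
    print(f"cache:                       {mib(stats['cache']):10.1f} MiB")
    print(f"active_file + inactive_file: {mib(file_pages):10.1f} MiB")
    print(f"difference:                  {mib(file_pages - stats['cache']):10.1f} MiB")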

Note: These stats are from a container running fluentd in a Kubernetes cluster; it streams logs from all the pods on the node to AWS CloudWatch, so there is a lot of file I/O going on, with containers writing to the log files and fluentd reading from them. (I wonder if this shared file access pattern has something to do with it...)

/sys/fs/cgroup/memory# cat memory.stat
cache 536576
rss 404602880
rss_huge 0
shmem 0
mapped_file 0
dirty 32768
writeback 0
swap 0
pgpgin 149468
pgpgout 50557
pgfault 904076
pgmajfault 0
inactive_anon 0
active_anon 176812032
inactive_file 216268800
active_file 12009472
unevictable 0
hierarchical_memory_limit 419430400
hierarchical_memsw_limit 419430400
total_cache 536576
total_rss 404602880
total_rss_huge 0
total_shmem 0
total_mapped_file 0
total_dirty 32768
total_writeback 0
total_swap 0
total_pgpgin 149468
total_pgpgout 50557
total_pgfault 904076
total_pgmajfault 0
total_inactive_anon 0
total_active_anon 176812032
total_inactive_file 216268800
total_active_file 12009472
total_unevictable 0
-- David Shin
Tags: cgroups, docker, kubernetes, memory

0 Answers