Is there a relation between docker log files and cached memory?

12/6/2019

I am running my applications on a bare-metal Kubernetes cluster which uses Ubuntu 18.04. For a long time I had problems with cached memory: some of my components were generating a lot of cached memory, and although the used memory was around 1% of the machine, the cache was around 90%, so kubelet was evicting all the pods on that machine.

Recently I also faced disk pressure, which was caused by the log files (at /var/lib/docker/containers/*/*-json.log) of the pods running on my machines. After I enabled Docker's log rotation by adding:

"log-driver": "json-file",
"log-opts": {
  "max-size": "10m",
  "max-file": "3"
}

to /etc/docker/daemon.json, I noticed an interesting side effect. As you can see from the chart, at the same time that I added the log rotation, the cached memory also disappeared. My question is: what is the relation between the Docker log files and the cached memory?
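
For reference, a complete /etc/docker/daemon.json containing only these options might look like the following (a minimal sketch, assuming no other daemon settings are configured):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

After restarting the Docker daemon (for example with sudo systemctl restart docker), the new log options only apply to containers created afterwards; existing containers keep their previous logging settings until they are recreated.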

-- AVarf
docker
kubernetes
logging
memory
memory-management

1 Answer

12/6/2019

Linux caches disk access in RAM to speed up future read requests; this is expected and desirable behavior. When applications need more RAM, this disk cache can be pruned. Likewise, if large files are deleted, I'd expect the cached pages backing them to be released as well.

The issue here is whether you count this disk cache when looking at available memory. Typically you don't, since applications can use that memory when needed. But some tools like Kubernetes appear to count it when evicting pods: https://github.com/kubernetes/kubernetes/issues/43916
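
To see how this plays out on a node, here is a minimal sketch (not part of the original answer) that parses /proc/meminfo and compares free, cached, and available memory. MemAvailable is the kernel's own estimate of how much memory applications can use without swapping, and it already treats most of the page cache as reclaimable (it is available on kernels 3.14+, so it is present on Ubuntu 18.04):

#!/usr/bin/env python3
# Sketch: compare "free", "cached", and "available" memory by parsing
# /proc/meminfo. Values in /proc/meminfo are reported in kB.

def read_meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # value in kB
    return info

m = read_meminfo()
print(f"total:     {m['MemTotal'] / 1024:.0f} MiB")
print(f"free:      {m['MemFree'] / 1024:.0f} MiB")
print(f"cached:    {m['Cached'] / 1024:.0f} MiB  (page cache, mostly reclaimable)")
print(f"available: {m['MemAvailable'] / 1024:.0f} MiB  (kernel's estimate of usable memory)")

Running this before and after deleting or rotating the large *-json.log files would likely show Cached dropping sharply while MemAvailable stays roughly the same, which is why counting the page cache as "used" memory is misleading.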

-- BMitch
Source: StackOverflow