I am trying to debug the storage usage in my Kubernetes pod. I have seen that the pod is being evicted because of DiskPressure. When I log in to the running pod, I see the following:
Filesystem      Size  Used  Avail  Use%  Mounted on
overlay          30G   21G   8.8G   70%  /
tmpfs            64M     0    64M    0%  /dev
tmpfs            14G     0    14G    0%  /sys/fs/cgroup
/dev/sda1        30G   21G   8.8G   70%  /etc/hosts
shm              64M     0    64M    0%  /dev/shm
tmpfs            14G   12K    14G    1%  /run/secrets/kubernetes.io/serviceaccount
tmpfs            14G     0    14G    0%  /proc/acpi
tmpfs            14G     0    14G    0%  /proc/scsi
tmpfs            14G     0    14G    0%  /sys/firmware
root@deploy-9f45856c7-wx9hj:/# du -sh /
du: cannot access '/proc/1142/task/1142/fd/3': No such file or directory
du: cannot access '/proc/1142/task/1142/fdinfo/3': No such file or directory
du: cannot access '/proc/1142/fd/4': No such file or directory
du: cannot access '/proc/1142/fdinfo/4': No such file or directory
227M /
root@deploy-9f45856c7-wx9hj:/# du -sh /tmp
11M /tmp
root@deploy-9f45856c7-wx9hj:/# du -sh /dev
0 /dev
root@deploy-9f45856c7-wx9hj:/# du -sh /sys
0 /sys
root@deploy-9f45856c7-wx9hj:/# du -sh /etc
1.5M /etc
root@deploy-9f45856c7-wx9hj:/#
As we can see, 21G is consumed, but when I run du -sh / it only returns 227M. I would like to find out which directory is consuming the space.
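For completeness, this is how I would get a per-directory breakdown from inside the container, staying on the root filesystem only (assuming GNU du and sort are available in the image):

root@deploy-9f45856c7-wx9hj:/# du -xh --max-depth=1 / 2>/dev/null | sort -h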
According to the docs on Node Conditions, DiskPressure is a condition on the node that causes the kubelet to evict pods. It doesn't necessarily mean that this pod is the one that caused the condition.
DiskPressure
Available disk space and inodes on either the node’s root filesystem or image filesystem has satisfied an eviction threshold
You may want to investigate what's happening on the node instead.
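A minimal way to start that investigation (a sketch; the node name and the kubelet/runtime paths below are assumptions that depend on your cluster setup):

$ kubectl describe node <node-name>    # look at the Conditions section for DiskPressure
$ kubectl get events --all-namespaces | grep -i evict

# and on the node itself, check the filesystems the kubelet watches:
$ df -h / /var/lib/kubelet
$ df -h /var/lib/docker                # or /var/lib/containerd, depending on the runtime
$ df -i /                              # low free inodes can also trigger DiskPressure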
It looks like process 1142 is still running and holding open file descriptors, and perhaps some disk space (you may have other processes with file descriptors that aren't being released either). Is it the kubelet? To alleviate the problem, you can verify that it's running and then kill it:
$ ps -Af | grep 1142
$ kill -9 1142
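Before killing it, a quick check for files that are open but already deleted, which is a common reason why df reports far more usage than du can find (a sketch; lsof may not be installed in a minimal image, but the /proc listing works without it):

$ ls -l /proc/1142/fd                                    # open file descriptors of PID 1142
$ lsof +L1                                               # open files whose link count is 0 (deleted)
$ ls -l /proc/[0-9]*/fd 2>/dev/null | grep '(deleted)'   # same idea without lsof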
P.S. You need to provide more information about the processes and what's running on that node.