Kubernetes cluster nodes running out of inodes in /tmp

1/15/2020

Is there a recommended minimum size (or minimum number of inodes) for the /tmp file system (partition) on Kubernetes cluster nodes?

What I am experiencing (on a bare-metal Kubernetes 1.16.3 cluster) is that cluster nodes hit 100% inode usage (according to df -i). This has the negative effects one would expect, e.g. kubectl exec ... bash into pods on the affected nodes fails with "no space left on device", yet kubectl get nodes (strangely) still reports these nodes as "Ready". The /tmp file systems involved are relatively small, i.e. 2G (121920 inodes) and 524M (35120 inodes) respectively.
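For context, this is roughly how the situation can be diagnosed on an affected node; the directory layout under /tmp is just an assumption, not something from the question:

    # Check inode usage of /tmp (as mentioned above)
    df -i /tmp

    # Rough sketch: count entries per top-level directory under /tmp
    # to see which workload is creating all the files
    for d in /tmp/*/; do
        printf '%s %s\n' "$(find "$d" -xdev | wc -l)" "$d"
    done | sort -rn | head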

-- rookie099
inode
kubernetes

1 Answer

1/16/2020

There is no recommended minimum size for Kubernetes. The defaults are good for most cases, but if you are creating, for example, many empty files, you may eventually run out of inodes. If you need more, you have to adjust the number manually when the file system is created.
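As a sketch of what "adjust the number manually" can look like: on ext4 the inode count is fixed at mkfs time and can only be raised by re-creating the file system, while a tmpfs mount can be given a higher limit via the nr_inodes option. The device name and sizes below are placeholders, not values from the question:

    # ext4: re-create the file system with an explicit inode count
    # (destroys existing data; /dev/sdb1 is a placeholder device)
    mkfs.ext4 -N 1000000 /dev/sdb1

    # tmpfs: raise the inode limit on an existing /tmp mount
    mount -o remount,nr_inodes=200k /tmp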

-- HelloWorld
Source: StackOverflow