Why does Kubernetes set a disk-pressure taint on my node?

10/31/2020

How can I figure out why the taint is set?

Here is the eviction configuration from my kubelet config:

kubeletArguments:
  eviction-soft:
  - memory.available<100Mi
  - nodefs.available<100Mi
  - nodefs.inodesFree<1%
  - imagefs.available<100Mi
  - imagefs.inodesFree<1%
  eviction-soft-grace-period:
  - memory.available=1m30s
  - nodefs.available=1m30s
  - nodefs.inodesFree=1m30s
  - imagefs.available=1m30s
  - imagefs.inodesFree=1m30s
  eviction-hard:
  - memory.available<100Mi
  - nodefs.available<100Mi
  - nodefs.inodesFree<1%
  - imagefs.available<100Mi
  - imagefs.inodesFree<1%

df -h output shows 3.8 GiB of 20 GiB total space available (well above the configured 100Mi), so neither the soft nor the hard eviction threshold should be reached. df -i says only 20% of inodes are used.

I've tried to figure out the reason by running sudo journalctl -u kubelet -b | grep pressure, but found nothing useful. Maybe someone could suggest better keywords?
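For reference, the taint and the node's DiskPressure condition can also be inspected directly; <node-name> is a placeholder here:

kubectl describe node <node-name>    # check the Conditions and Taints sections

Grepping the kubelet logs for "eviction" may also match more of the eviction manager's messages than "pressure" does:

sudo journalctl -u kubelet -b | grep -i eviction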

-- Vitaliy Ivanov
kubelet
kubernetes

1 Answer

11/1/2020

Resolved. It turned out I had used the wrong syntax to configure the thresholds. Here is the correct way to set them:

evictionSoft:
  memory.available: "100Mi"
  nodefs.available: "100Mi"
  nodefs.inodesFree: "1%"
  imagefs.available: "100Mi"
  imagefs.inodesFree: "1%"
evictionSoftGracePeriod:
  memory.available: 5m
  nodefs.available: 5m
  nodefs.inodesFree: 5m
  imagefs.available: 5m
  imagefs.inodesFree: 5m
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "100Mi"
  nodefs.inodesFree: "1%"
  imagefs.available: "100Mi"
  imagefs.inodesFree: "1%"

(the config file is located at /var/lib/kubelet/config.yaml in my case)
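For completeness, these keys sit at the top level of the KubeletConfiguration object, so a minimal config file would look roughly like this (all other fields left at their defaults):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  nodefs.available: "100Mi"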

Then the kubelet needs to be restarted: sudo systemctl restart kubelet

And here is a useful command to fetch the kubelet logs and check whether it started correctly: journalctl -u kubelet --since "1min ago"
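Once the kubelet is back up, the disk-pressure taint should clear; this can be confirmed with (again, <node-name> is a placeholder):

kubectl describe node <node-name> | grep -i taint

If the condition has resolved, the output should no longer list node.kubernetes.io/disk-pressure.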

-- Vitaliy Ivanov
Source: StackOverflow