I'm using v1.0.3 and kubectl get pods shows hundreds of pods in state OutOfDisk.
Bug or feature?
aws-domains-xps5u 0/1 OutOfDisk 0 1h
aws-domains-xxs0w 0/1 OutOfDisk 0 46m
aws-domains-xxw1a 0/1 OutOfDisk 0 1h
aws-domains-xy3oh 0/1 OutOfDisk 0 1h
aws-domains-xy980 0/1 OutOfDisk 0 1h
aws-domains-xz0ho 0/1 OutOfDisk 0 1h
aws-domains-xz417 0/1 OutOfDisk 0 1h
aws-domains-y0kux 0/1 OutOfDisk 0 1h
aws-domains-y3bg7 0/1 OutOfDisk 0 1h
aws-domains-y4n11 0/1 OutOfDisk 0 39m
aws-domains-y7w1w 0/1 OutOfDisk 0 38m
aws-domains-y8g22 0/1 OutOfDisk 0 52m
aws-domains-y8zaq 0/1 OutOfDisk 0 1h
aws-domains-ya9x8 0/1 OutOfDisk 0 1h
aws-domains-yauq5 0/1 OutOfDisk 0 1h
aws-domains-yblkl 0/1 OutOfDisk 0 38m
This is somewhat working as intended, even though it is not ideal. The series of actions leading to this situation is:
(1) A node runs out of disk space.
(2) The kubelet rejects the pods assigned to that node and marks them OutOfDisk.
(3) The replication controller sees that its pods have failed and creates replacements, which the scheduler may assign back to the same out-of-disk node.
(1)-(3) would keep happening if the situation doesn't change (e.g., disk space being freed or new nodes being added). Kubernetes currently doesn't garbage collect the terminated pods, so you'd see many of them in the kubectl get pods output.
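In the meantime you can delete the terminated pods by hand. A rough sketch, assuming the pods live in the default namespace (otherwise add --namespace=<ns> to both kubectl invocations):

# list pods, keep only the OutOfDisk ones, and delete them by name (first column)
kubectl get pods | grep OutOfDisk | awk '{print $1}' | xargs kubectl delete pod

Note that deleting them only cleans up the listing; as long as the node stays out of disk, steps (1)-(3) will keep producing new ones.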
Below are some related GitHub issues about handling this situation better:
Meanwhile, you probably want to check the specific node to see why it is running out of disk.
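For example (a sketch; <node-name> is a placeholder for the affected node, and whether an OutOfDisk entry appears under Conditions depends on your version):

kubectl describe node <node-name>   # check the Conditions and capacity sections

and on the node itself:

df -h                               # overall disk usage per filesystem
du -sh /var/lib/docker /var/log     # common culprits if you run Docker: container images and logs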