After creating a simple hello world deployment, my pod status shows as "PENDING". When I run kubectl describe pod on the pod, I get the following:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 14s (x6 over 29s) default-scheduler 0/1 nodes are available: 1 NodeUnderDiskPressure.
If I check on my node health, I get:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Fri, 27 Jul 2018 15:17:27 -0700 Fri, 27 Jul 2018 14:13:33 -0700 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Fri, 27 Jul 2018 15:17:27 -0700 Fri, 27 Jul 2018 14:13:33 -0700 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure True Fri, 27 Jul 2018 15:17:27 -0700 Fri, 27 Jul 2018 14:13:43 -0700 KubeletHasDiskPressure kubelet has disk pressure
Ready True Fri, 27 Jul 2018 15:17:27 -0700 Fri, 27 Jul 2018 14:13:43 -0700 KubeletReady kubelet is posting ready status. AppArmor enabled
So it seems the issue is that "kubelet has disk pressure", but I can't really figure out what that means. I can't SSH into minikube and check its disk space because I'm using VMware Workstation with --vm-driver=none.
This is an old question, but I just saw it, and because it doesn't have an answer yet I will write mine.
I was facing this problem and my pods were getting evicted many times because of disk pressure, and commands such as df or du were not helpful.
With the help of the answer I wrote at https://serverfault.com/a/994413/509898, I found out that the main problem is the pods' log files: because K8s does not handle log rotation itself, they can grow to hundreds of gigabytes.
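A quick way to confirm this (assuming the default Docker json-file logging driver and its default path) is to look at the per-container log sizes on the node:
sudo du -sh /var/lib/docker/containers/*/*-json.log | sort -h | tail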
There are different log rotation methods available, but I am currently still looking for the best practice for K8s, so I can't suggest a specific one yet.
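As one possible sketch (not necessarily the best practice I am still looking for), if the cluster uses Docker's json-file log driver you can cap log growth in /etc/docker/daemon.json and restart Docker; the sizes below are only illustrative:
{ "log-driver": "json-file", "log-opts": { "max-size": "100m", "max-file": "3" } }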
I hope this can be helpful.
The community hinted at this in the comments above. I will try to consolidate it.
The kubelet maps one or more eviction signals to a corresponding node condition. If a hard eviction threshold has been met, or a soft eviction threshold has been met independent of its associated grace period, the kubelet reports a condition that reflects the node is under pressure.
DiskPressure - Available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold.
So the problem might be that there is not enough disk space, or that the filesystem has run out of inodes. You have to check the conditions of your environment and then adjust your kubelet configuration accordingly.
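For example, the thresholds that trigger DiskPressure can be tuned with the kubelet's eviction flags; the values below are only illustrative, not a recommendation:
--eviction-hard=nodefs.available<10%,nodefs.inodesFree<5%,imagefs.available<15%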
You do not need to SSH into minikube since you are running it inside of your host: --vm-driver=none is the option that runs the Kubernetes components on the host and not in a VM. Docker is required to use this driver, but no hypervisor. If you use --vm-driver=none, be sure to specify a bridge network for docker. Otherwise it might change between network restarts, causing loss of connectivity to your cluster.
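One possible way to pin the docker bridge is to set the bridge IP in /etc/docker/daemon.json and restart Docker (the address here is just a placeholder, pick one that fits your network):
{ "bip": "172.31.0.1/24" }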
You might try to check whether there are any issues related to the topics mentioned above:
kubectl describe nodes
Look at df reports:
df -i
df -h
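Since the none driver runs everything on the host, you can also check Docker's own disk usage directly (assuming the default /var/lib/docker data root):
sudo du -sh /var/lib/docker
docker system df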
Some further reading so you can grasp the topic: Configure Out Of Resource Handling - section Node Conditions.