DiskPressure on node when deploying using Kubernetes; pod stuck in Pending state

7/30/2020

I am trying to bring up an app that previously ran successfully under Docker Swarm, this time using Kubernetes. I have a master VirtualBox VM and a worker node, both running Ubuntu. The deployment and service have CPU and memory requests and limits set correctly (or so I think). The deployment and service are created successfully, but kubectl describe pods consistently shows event messages like

0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
Warning  FailedScheduling  <unknown>  default-scheduler  0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
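For reference, the taints the scheduler is complaining about can be listed per node; a quick check (the node name is a placeholder for whatever kubectl get nodes reports) looks something like:

$ kubectl get nodes

$ kubectl describe node <node-name> | grep -i taints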

I have checked a few things and increased the disk capacity on the master VM (and on the worker node, though the problem only shows up on the master). I also reset the worker node using kubeadm and redeployed.
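To see how much headroom the node actually has, disk usage can be checked on the node itself; /var/lib/kubelet is the default kubelet data directory, and by default the kubelet reports disk pressure once the node filesystem falls below roughly 10% free space:

$ df -h / /var/lib/kubelet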

I have also gone through similar questions and suggestions. I am looking for suggestions to get the pod to Ready, or at least out of the Pending state.

Thanks

The deployment requests cpu: 500m with a limit of 1, and memory: 1Gi with a limit of 2Gi.
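For what it's worth, the requests and limits the scheduler actually sees can be printed from the live deployment; the deployment name below is a placeholder:

$ kubectl get deployment <deployment-name> -o jsonpath='{.spec.template.spec.containers[*].resources}'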

-- MNathan
docker
kubernetes

1 Answer

7/31/2020

On the master/main node, if you really want the pod scheduled there, you can try un-tainting it:

$ kubectl taint nodes --all node-role.kubernetes.io/master-

For the disk-pressure taint, you may have something left over in your control plane from a previous instance of a node with the same name. If you know that you have enough space, you can force-remove it:

$ kubectl taint nodes --all node.kubernetes.io/disk-pressure-

Keep in mind that this is a taint automatically added by the node controller.
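If the DiskPressure condition on the node is still True, the node controller will simply add the taint back, so it is worth confirming the condition itself after freeing space (the node name is a placeholder):

$ kubectl get node <node-name> -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")].status}'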

✌️

-- Rico
Source: StackOverflow