Setting a Kubernetes node as unavailable

12/15/2020

Can anyone please tell me the options for setting a Kubernetes node as unavailable and rescheduling all the pods running on it?

I have tried draining the node, but I am not sure whether draining a node actually reschedules the pods running on it to some other node.

By using the --force option, my only existing pod was evicted/deleted.

-- merchant
kubernetes

3 Answers

12/15/2020

From the docs: draining evicts the existing pods (their controllers then recreate them on other nodes) and marks the node as unschedulable via a NoSchedule taint, so that no new pods will be scheduled on that node.
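
For example (a minimal sketch; <node-name> is a placeholder), you would drain the node, let the controllers recreate the evicted pods on other nodes, and later uncordon it when it should accept pods again:

$ kubectl drain <node-name> --ignore-daemonsets
$ kubectl get pods -o wide      # check that the evicted pods were recreated on other nodes
$ kubectl uncordon <node-name>  # make the node schedulable again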

-- Arghya Sadhu
Source: StackOverflow

12/15/2020

If you are using Kubernetes Deployments/ReplicaSets, they should do this for you. Your Deployment is configured with a desired number of replicas; when you remove a node, the controller will see that the current number of running pods is less than the desired number and automatically create new ones.

If you are just deploying pods without a Deployment, then this won't happen and the only solution is to redeploy manually; that is why you should use a Deployment. A quick sketch of this is shown below.
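
As a small illustration (hypothetical names, assuming a kubectl version that supports the --replicas flag on kubectl create deployment), a Deployment keeps the desired replica count even if a node goes away:

$ kubectl create deployment web --image=nginx --replicas=3
$ kubectl get deployment web    # READY should return to 3/3 after a node is drained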

NOTE:

The new pods are not the previously running pods being moved; any state of the previous pods that was not persisted will be gone.

Take a look: rescheduling-pods-kubernetes.

However, using the $ kubectl drain command is also a good idea:

Look at the answer - draining-nodes.

From official documentation:

The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). If there are DaemonSet-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it will not delete any DaemonSet-managed pods, because those pods would be immediately replaced by the DaemonSet controller, which ignores unschedulable markings. If there are any pods that are neither mirror pods nor managed by ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force. --force will also allow deletion to proceed if the managing resource of one or more pods is missing.
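
For example (a sketch; the exact flag name for local data depends on your kubectl version, with --delete-emptydir-data replacing the older --delete-local-data in newer releases), a typical drain of a node running DaemonSet pods and pods with emptyDir volumes would look like:

$ kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data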

You can refer to a chart that illustrates what happens when you use the $ kubectl drain command.

Also try $ kubectl drain with the --dry-run argument so you can see its outcome before any changes are applied:

$ kubectl drain <node-name> --force --dry-run

NOTE: The dry run will not show errors about existing local data or DaemonSets that you would see when running the command without the --dry-run argument.
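
Once you run the drain for real, you can verify the result (again, <node-name> is a placeholder) with:

$ kubectl get nodes                          # the drained node should show SchedulingDisabled
$ kubectl get pods -o wide --all-namespaces  # check where the pods were recreated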

-- Malgorzata
Source: StackOverflow

12/15/2020

If you are using Kubernetes 1.5+, kubectl drain <nodename> should do the trick. (See here: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/)

Maybe there was no node left that could start your Pod? This does not necessarily mean there are no other nodes; perhaps the scheduler was just unable to schedule your Pod on another node.
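
If that happens, checking the Pod's events usually shows the reason (hypothetical pod name):

$ kubectl describe pod <pod-name>    # look for FailedScheduling events, e.g. insufficient CPU/memory or taints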

Regards

-- GrimKroton
Source: StackOverflow