Redistribute load after a Worker node is returned to the cluster

5/14/2018

I have a cluster of 5 worker nodes. When I shut down one of the workers, the pods running on it are redistributed to the other nodes.

Now I bring the worker node back up, but the pods are not redistributed, so one node is left almost empty.

Is there any way to force Kubernetes to redistribute the load onto the "new" worker node?

Thank you.

-- Jxadro
kubernetes

4 Answers

5/15/2018

Instead of a Deployment you could use a DaemonSet. A DaemonSet runs one copy of the pod on every node of the cluster, so your pods would automatically be placed on a node as soon as it is added. This way you could add a new node before removing the old one and your pods would still be running.
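
A minimal sketch of such a DaemonSet manifest (the name, label, and image are illustrative, not taken from the question):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-app              # illustrative name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0   # illustrative image

Apply it with kubectl apply -f daemonset.yaml and a pod is created on every schedulable node, including nodes added later. Keep in mind that a DaemonSet runs exactly one pod per node, so it only fits workloads where that layout is acceptable.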

-- James Knott
Source: StackOverflow

5/14/2018

I don't believe there's a built-in mechanism to 'load balance' the cluster. Over time, as pods are deleted and recreated during normal operation, the distribution will even out by itself.

If you want to trigger a bit of redistribution, you could scale up the deployments that can run in parallel, then scale them down again. Others you can re-release; a command for that is sketched below.
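
For the deployments you re-release rather than scale, a redeploy can be triggered from the command line; on kubectl 1.15 and later this works without touching the manifest:

kubectl rollout restart deployment/<DEPLOYMENT_NAME> -n <NAMESPACE>

The replacement pods go through the scheduler again, so some of them should land on the newly returned node.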

-- rln
Source: StackOverflow

4/9/2019

As @rln mentioned, scaling down and up again can trigger redistribution. These are the commands if you're using kubectl.

Scale down: kubectl scale --replicas=<SIZE LESS THAN YOUR CURRENT DEPLOYMENT> deployment/<DEPLOYMENT_NAME> -n <NAMESPACE>

Wait for pods to terminate.

Scale back up: kubectl scale --replicas=<ORIGINAL DESIRED DEPLOYMENT SIZE> deployment/<DEPLOYMENT_NAME> -n <NAMESPACE>
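
For example, assuming a deployment named web that normally runs 6 replicas in the default namespace (the name and numbers are illustrative):

kubectl scale --replicas=3 deployment/web -n default

kubectl scale --replicas=6 deployment/web -n default

The pods created by the second command are scheduled fresh, so the mostly empty node is a likely target for them.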

-- ProGirlXOXO
Source: StackOverflow

5/14/2018

You can't move a pod from one node to another without killing it, so you can go ahead and delete the pods (if there is no impact). Most probably they will be scheduled onto the free node.
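
For example, pods can be deleted one by one or by label (the app=my-app label is illustrative):

kubectl delete pod <POD_NAME> -n <NAMESPACE>

kubectl delete pods -l app=my-app -n <NAMESPACE>

The controller recreates them, and the scheduler, seeing the free node, will likely place some of the replacements there.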

There are ways of configuring affinity, but if you shut down a node again you will find yourself in the same situation, so it won't help.
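
For reference, spreading via pod anti-affinity looks roughly like this inside a Deployment's pod template, assuming the pods carry the label app: my-app (illustrative). It only applies when a pod is scheduled, which is why it does not move pods that are already running:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:   # soft rule, not a hard requirement
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: my-app                                # illustrative label
        topologyKey: kubernetes.io/hostname            # spread across nodes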

-- suren
Source: StackOverflow