Redistribute pods after adding a node in Kubernetes

5/18/2017

What should I do with pods after adding a node to the Kubernetes cluster?

I mean, ideally I want some of them to be stopped and started on the newly added node. Do I have to manually pick some to stop and hope that they get rescheduled onto the newly added node?

I don't care about affinity, just semi-even distribution.

Maybe there's a way to always have the number of pods be equal to the number of nodes?

For the sake of having an example:

I'm using Juju to provision a small Kubernetes cluster on AWS: one master and two workers. This is just a playground.

My application is Apache serving PHP and static files, so I have a Deployment, a Service of type NodePort, and an Ingress using nginx-ingress-controller.
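Roughly, the setup looks like this (a minimal sketch using current API versions; the names, image, and host are placeholders, not my actual manifests):

    # Sketch of the setup described above; names, image, and host are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: php-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: php-app
      template:
        metadata:
          labels:
            app: php-app
        spec:
          containers:
          - name: apache-php
            image: php:7-apache   # Apache serving PHP and static files
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: php-app
    spec:
      type: NodePort
      selector:
        app: php-app
      ports:
      - port: 80
        targetPort: 80
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: php-app
    spec:
      rules:
      - host: example.local       # matched by the /etc/hosts entry mentioned below
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: php-app
                port:
                  number: 80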

I turned off one of the worker instances, and my application pods were recreated on the one that remained running.

I then started the instance back up; the master picked it up and started the nginx ingress controller there. But when I tried deleting my application pods, they were recreated on the instance that had kept running, not on the one that was restarted.

Not sure if it's important, but I don't have any DNS set up. I just added the IP of one of the instances to /etc/hosts, with the host value from my ingress.

-- clorz
kubernetes

2 Answers

6/5/2019

descheduler[1], a Kubernetes incubator project, could be helpful. The following is from its introduction:

As Kubernetes clusters are very dynamic and their state changes over time, there may be a desire to move already running pods to some other nodes for various reasons:

  • Some nodes are under- or over-utilized.
  • The original scheduling decision no longer holds true, as taints or labels have been added to or removed from nodes and pod/node affinity requirements are no longer satisfied.
  • Some nodes failed and their pods moved to other nodes.
  • New nodes are added to clusters.

[1] https://github.com/kubernetes-incubator/descheduler
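For instance, the descheduler is driven by a policy file; a minimal sketch enabling its LowNodeUtilization strategy could look like the following (the threshold values are illustrative, not recommendations):

    # Sketch of a descheduler policy; threshold numbers are illustrative.
    apiVersion: "descheduler/v1alpha1"
    kind: "DeschedulerPolicy"
    strategies:
      "LowNodeUtilization":
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            # Nodes below all of these are considered underutilized...
            thresholds:
              "cpu": 20
              "memory": 20
              "pods": 20
            # ...and pods are evicted from nodes above any of these.
            targetThresholds:
              "cpu": 50
              "memory": 50
              "pods": 50

Run as a Job or CronJob inside the cluster, the descheduler evicts pods from over-utilized nodes so the scheduler can place the replacements on under-utilized ones, such as a freshly added node.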

-- Bo Wang
Source: StackOverflow

5/18/2017

There is no automatic redistribution in Kubernetes when you add a new node. You can force a redistribution of single pods by deleting them while having a host-based anti-affinity policy in place. Otherwise, Kubernetes will prefer the new node for scheduling and thus achieve a redistribution over time.
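For example, a soft host-based anti-affinity rule makes the scheduler prefer nodes that do not already run a replica; a sketch (the app label and names here are illustrative and should match your own deployment) could look like:

    # Sketch: soft pod anti-affinity so replicas prefer to land on different nodes.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: php-app              # illustrative name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: php-app
      template:
        metadata:
          labels:
            app: php-app
        spec:
          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app: php-app
                  topologyKey: kubernetes.io/hostname   # spread by node
          containers:
          - name: apache-php
            image: php:7-apache

With that in place, deleting a pod (kubectl delete pod <pod-name>) will usually get its replacement scheduled onto the emptier, newly added node.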

What are your reasons for a manually triggered redistribution?

-- Lukas Eichler
Source: StackOverflow