How to automatically rebalance pods across all nodes/workers in AWS EKS (Kubernetes)

11/1/2019

Suppose we have a 4-node EKS cluster backed by an EC2 Auto Scaling group with a minimum of 4 nodes, and a Kubernetes application stack deployed on it with one pod per node. Traffic increases and the HPA is triggered at the EKS level, so there are now 8 pods in total, two pods per node. Cluster autoscaling is also triggered, so there are now 6 nodes in total.

It is observed that all pods remain where they are, even after the autoscaling.

Is there a direct and simple way to have some of the already running pods automatically reschedule themselves onto the recently added, idle (i.e. non-utilized) workers/nodes, for example by force-evicting pods?

Thanks in Advance.

-- kubemaster
amazon-eks
kubernetes

2 Answers

11/1/2019

One easy way is to delete those pods by selector using the command below and let the Deployment recreate them in the cluster:

kubectl delete po -l key=value
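For example, assuming (as an illustration) that your Deployment labels its pods with app=my-app and runs in the my-namespace namespace, substitute your own label and namespace:

kubectl delete pods -l app=my-app -n my-namespace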

There could be other possibilities. I would be glad to hear about them from others.

-- P Ekambaram
Source: StackOverflow

11/1/2019

Take a look at the Descheduler. This project runs as a Kubernetes Job and evicts pods when it considers the cluster unbalanced.

The LowNodeUtilization strategy seems to fit your case:

This strategy finds nodes that are under utilized and evicts pods, if possible, from other nodes in the hope that recreation of evicted pods will be scheduled on these underutilized nodes.
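As a rough sketch of what a descheduler policy enabling this strategy could look like (the exact schema depends on the descheduler version you deploy, so check the project's README; the threshold numbers below are placeholders, not recommendations):

apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        # Nodes below all of these values are considered underutilized
        thresholds:
          "cpu": 20
          "memory": 20
          "pods": 20
        # Pods may be evicted from nodes above any of these values
        targetThresholds:
          "cpu": 50
          "memory": 50
          "pods": 50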


Another option is to apply a little chaos engineering manually by forcing a rolling update on your Deployment; hopefully the scheduler will fix the balance problem when the pods are recreated.

You can use kubectl rollout restart deployment/my-deployment. It is much better than simply deleting the pods with kubectl delete pod, as the rollout will ensure availability during the "rebalancing" (although deleting all the pods at once does increase your chances of a better rebalance).
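As a minimal sketch, assuming your workload is a Deployment named my-deployment in the my-namespace namespace (substitute your own names), and noting that rollout restart requires kubectl 1.15 or newer:

# Trigger a rolling restart; replacement pods should be scheduled across the now-larger node pool
kubectl rollout restart deployment/my-deployment -n my-namespace

# Watch until the rollout completes
kubectl rollout status deployment/my-deployment -n my-namespace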

-- Eduardo Baitello
Source: StackOverflow