Kubernetes - Trigger a rebalancing of pods

8/31/2018

I have a Kubernetes cluster with a few nodes set up. I want to make sure that pods are distributed efficiently across the nodes.

I'll explain:

Let's assume that I have two nodes:

- Node 1: 2 GB RAM
- Node 2: 2 GB RAM

And I have these pods:

- Pod 1: 1 GB RAM, on Node 1
- Pod 2: 100 MB RAM, on Node 1
- Pod 3: 1 GB RAM, on Node 2
- Pod 4: 100 MB RAM, on Node 2

OK, now the problem: let's say I want to add a pod that needs 1 GB of RAM to the cluster. Currently there's no room on any single node, so Kubernetes won't schedule it (unless I add another node). I wonder if there's a way to make Kubernetes see that it can move Pod 3 to Node 1 to make room for the new pod?
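For context, the scheduler decides fit based on the resource *requests* pods declare, not their actual usage. A minimal sketch of the new pod (names and image are placeholders) might declare its 1 GB requirement like this:

```yaml
# Hypothetical spec for the new pod: the scheduler only considers
# declared requests, so the 1Gi request is what must fit on a node.
apiVersion: v1
kind: Pod
metadata:
  name: new-pod
spec:
  containers:
  - name: app
    image: nginx   # placeholder image
    resources:
      requests:
        memory: "1Gi"
      limits:
        memory: "1Gi"
```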

Help

-- refaelos
kubernetes
rebalancing
scheduling

1 Answer

8/31/2018

The Kubernetes descheduler incubator project will eventually be integrated into Kubernetes to accommodate rebalancing. Rebalancing can be triggered by under- or over-utilization of node resources, as in your case, or for other reasons, such as changes in node taints or affinities.

For your case, you could run the descheduler with the LowNodeUtilization strategy and carefully configured thresholds so that some pods are evicted and re-added to the scheduling queue behind the new 1 GB pod.

Another approach is to use pod priority classes so that a lower-priority pod is preempted to make room for the new incoming 1 GB pod. Pod priorities are enabled by default as of version 1.11. Priorities aren't intended as a rebalancing mechanism, but I mention them because they are a viable way to ensure a higher-priority incoming pod can be scheduled. Priority and preemption also replace the old rescheduler, which will be removed in 1.12.
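A rough sketch of the priority approach (names and values are illustrative; in 1.11 the PriorityClass API is `scheduling.k8s.io/v1beta1`): create a PriorityClass, then reference it from the incoming pod so the scheduler may preempt lower-priority pods to place it.

```yaml
# Assumed names/values for illustration only.
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "For pods that may preempt lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-1gb-pod
spec:
  priorityClassName: high-priority   # links the pod to the class above
  containers:
  - name: app
    image: nginx   # placeholder image
    resources:
      requests:
        memory: "1Gi"
```

Pods without a priorityClassName get the default priority (0, unless a class is marked globalDefault), so existing pods like Pod 3 would be preemption candidates.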

Edit - include sample policy

The policy I used to test this is below:

apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:
          "memory": 50
        targetThresholds:
          "memory": 51
          "pods": 0
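To actually run the policy, the descheduler is typically launched as a one-off Job that mounts the policy from a ConfigMap. The manifest below is a hedged sketch: the image name, service account, and ConfigMap name are assumptions, so check the incubator repo's README for the exact manifests and RBAC it needs.

```yaml
# Assumed sketch of running the descheduler as a Job; image,
# service account, and ConfigMap names are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: descheduler-job
  namespace: kube-system
spec:
  template:
    spec:
      serviceAccountName: descheduler-sa   # assumed SA with pod-eviction RBAC
      restartPolicy: Never
      containers:
      - name: descheduler
        image: descheduler:latest          # placeholder: build per the project README
        command:
        - /bin/descheduler
        - --policy-config-file=/policy-dir/policy.yaml
        volumeMounts:
        - name: policy-volume
          mountPath: /policy-dir
      volumes:
      - name: policy-volume
        configMap:
          name: descheduler-policy         # assumed ConfigMap holding the policy
```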
-- logan rakai
Source: StackOverflow