Move all pods to the same node after scaling down in AKS

4/30/2021

We have a deployment in AKS that we scale down to 3 replicas in the evening and back up to 9 in the morning.

A single node only has resources for 3 of those pods, so in the morning AKS should create 2 new nodes.

But what sometimes happens is that the evening scale-down kills 2 pods on every node, leaving 3 running nodes with 1 pod each. Other times it leaves 2 nodes, one with 1 pod and one with 2 pods.

What we want at night is a single node running all 3 pods.

How can we accomplish this?
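
To illustrate, the deployment looks roughly like this (the name, image, and resource numbers here are placeholders, not our real config; the requests are sized so only 3 pods fit on one node):

```sh
# Hypothetical deployment sketch: requests sized so ~3 pods fit per node.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                # placeholder name
spec:
  replicas: 9                # scaled to 3 in the evening, 9 in the morning
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry/myapp:latest   # placeholder image
        resources:
          requests:
            cpu: "1"         # with ~3-4 allocatable CPUs per node,
            memory: "1Gi"    # only 3 of these pods fit on a single node
EOF

# Evening scale-down / morning scale-up:
kubectl scale deployment myapp --replicas=3
kubectl scale deployment myapp --replicas=9
```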

-- Geert
azure-aks
kubernetes

1 Answer

4/30/2021

Have you checked Cluster Autoscaler? In theory it can achieve what you are asking for.

Cluster Autoscaler is a standalone program that adjusts the size of a Kubernetes cluster to meet the current needs.

It automatically adjusts the size of the cluster when one of the following conditions is true:

  • there are pods that failed to run in the cluster due to insufficient resources.
  • there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.
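
On AKS it ships as a managed option per node pool, so you don't deploy the autoscaler yourself. A minimal sketch of enabling it with the az CLI (the resource group, cluster, and node pool names are placeholders; the min/max counts match your 3-pods-per-node sizing):

```sh
# Enable the managed Cluster Autoscaler on an existing AKS node pool.
# myResourceGroup / myAKSCluster / nodepool1 are placeholder names.
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 3   # 1 node at night (3 pods), 3 nodes in the morning (9 pods)
```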

Why exactly this autoscaler? How is Cluster Autoscaler different from CPU-usage-based node autoscalers?

Cluster Autoscaler makes sure that all pods in the cluster have a place to run, no matter if there is any CPU load or not. Moreover, it tries to ensure that there are no unneeded nodes in the cluster.

CPU-usage-based (or any metric-based) cluster/node group autoscalers don't care about pods when scaling up and down. As a result, they may add a node that will not have any pods, or remove a node that has some system-critical pods on it, like kube-dns. Usage of these autoscalers with Kubernetes is discouraged.
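
For your night-time case the interesting part is scale-down: once the deployment drops to 3 replicas, the autoscaler should notice the underutilized nodes, evict their remaining pods onto one node, and delete the empty nodes. On AKS you can tune how eagerly this happens through the cluster autoscaler profile; a sketch with illustrative values (the timings and threshold are assumptions, not recommendations):

```sh
# Make scale-down more aggressive so near-empty nodes are drained
# and removed sooner after the evening scale-down.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --cluster-autoscaler-profile \
      scale-down-unneeded-time=10m \
      scale-down-utilization-threshold=0.5
```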

Check the cluster-autoscaler AKS add-on example for details.
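
To verify that consolidation actually happens after the evening scale-down, you can check the autoscaler's status ConfigMap and where the pods land (the app=myapp label is a placeholder):

```sh
# The managed autoscaler records its state in a ConfigMap in kube-system.
kubectl get configmap cluster-autoscaler-status -n kube-system -o yaml

# After scale-down, all 3 pods should be scheduled on a single node.
kubectl get pods -l app=myapp -o wide
kubectl get nodes
```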

-- Vit
Source: StackOverflow