Kubernetes Autoscaler: no downtime for deployments when downscaling is possible?

8/15/2020

In a project, I'm enabling the cluster autoscaler functionality from Kubernetes.

According to the documentation (How does scale down work?), I understand that when a node's utilization stays below 50% of its capacity for a given time, the node is removed together with all of its pods, which are then recreated on a different node if needed.

But the following problem can happen: what if all the pods belonging to a specific deployment are running on a node that is being removed? That would mean users could experience downtime for that deployment's application.

Is there a way to prevent scale-down from deleting a node when some deployment's pods are running only on that node?

I have checked the documentation, and one possible (though not ideal) solution is to add an annotation to all of the application pods so the autoscaler will not evict them, but this clearly would not downscale the cluster in an optimal way.
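For reference, the annotation I mean is Cluster Autoscaler's safe-to-evict annotation, set on the deployment's pod template. A minimal sketch (the deployment name is hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # Tells Cluster Autoscaler it may not evict this pod,
        # so its node will never be scaled down.
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
    spec:
      containers:
      - name: app
        image: my-app:latest   # hypothetical image
```

The downside is that any node hosting such a pod can never be removed, even when the cluster is mostly idle.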

-- Rodrigo Boos
autoscaling
downtime
kubernetes

2 Answers

8/15/2020

The same document you referred to has this:

How is Cluster Autoscaler different from CPU-usage-based node autoscalers? Cluster Autoscaler makes sure that all pods in the cluster have a place to run, no matter if there is any CPU load or not. Moreover, it tries to ensure that there are no unneeded nodes in the cluster.

CPU-usage-based (or any metric-based) cluster/node group autoscalers don't care about pods when scaling up and down. As a result, they may add a node that will not have any pods, or remove a node that has some system-critical pods on it, like kube-dns. Usage of these autoscalers with Kubernetes is discouraged.

-- FEldin
Source: StackOverflow

8/17/2020

In the same documentation:

What happens when a non-empty node is terminated? As mentioned above, all pods should be migrated elsewhere. Cluster Autoscaler does this by evicting them and tainting the node, so they aren't scheduled there again.

What is eviction?

The eviction subresource of a pod can be thought of as a kind of policy-controlled DELETE operation on the pod itself.

OK, but what if all pods on the node get evicted at the same time? You can use a Pod Disruption Budget to make sure a minimum number of replicas is always running:

What is a PDB?

A PDB limits the number of Pods of a replicated application that are down simultaneously from voluntary disruptions.

In k8s docs you can also read:

A PodDisruptionBudget has three fields:

A label selector .spec.selector to specify the set of pods to which it applies. This field is required.

.spec.minAvailable which is a description of the number of pods from that set that must still be available after the eviction, even in the absence of the evicted pod. minAvailable can be either an absolute number or a percentage.

.spec.maxUnavailable (available in Kubernetes 1.7 and higher) which is a description of the number of pods from that set that can be unavailable after the eviction. It can be either an absolute number or a percentage.
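Putting those fields together, a PDB for the deployment could look something like this (the name and label are hypothetical, and the API version depends on your cluster; older clusters use policy/v1beta1):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb        # hypothetical name
spec:
  # At least 1 pod matching the selector must stay available
  # during voluntary disruptions such as autoscaler eviction.
  # Use either minAvailable or maxUnavailable, not both.
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app         # must match the deployment's pod labels
```

With this in place, the eviction API will refuse to evict a pod if doing so would drop the deployment below one available replica, so the autoscaler drains the node gradually instead of all at once.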

So if you use a PDB for your deployment, its pods will not all be deleted at once.

But please note that if the node fails for some other reason (e.g. hardware failure), you will still experience downtime. If you really care about high availability, consider using pod anti-affinity to make sure the pods are not all scheduled on one node.
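As a sketch, anti-affinity on the deployment's pod template could look like this (the label is hypothetical and must match your pods):

```yaml
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never schedule two pods with this label
          # on the same node (topologyKey = hostname).
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-app          # hypothetical label
            topologyKey: kubernetes.io/hostname
```

If a hard rule is too strict for your node count, `preferredDuringSchedulingIgnoredDuringExecution` expresses the same intent as a soft preference.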

-- Matt
Source: StackOverflow