Is there a way to resize a GKE cluster to 0 nodes after a certain amount of idle time?

9/27/2019

I have a GKE cluster that I want to have sitting at 0 nodes, scale up to 3 nodes to perform a task, and then after a certain amount of idle time, scale back down to 0. Is there a way to do this?

-- Cam
google-cloud-platform
google-kubernetes-engine
kubernetes

3 Answers

9/27/2019

As we can read in the GKE documentation about the Cluster autoscaler:

Autoscaling limits

When you autoscale clusters, node pool scaling limits are determined by zone availability.

For example, the following command creates an autoscaling multi-zone cluster with six nodes across three zones, with a minimum of one node per zone and a maximum of four nodes per zone:

gcloud container clusters create example-cluster \
--zone us-central1-a \
--node-locations us-central1-a,us-central1-b,us-central1-f \
--num-nodes 2 --enable-autoscaling --min-nodes 1 --max-nodes 4

The total size of this cluster is between three and twelve nodes, spread across three zones. If one of the zones fails, the total size of the cluster becomes between two and eight nodes.
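For the exact scenario in the question, a minimal sketch (assuming an existing cluster; the cluster, zone and node pool names below are placeholders) would be to set the pool's autoscaling minimum to zero and maximum to three:

gcloud container clusters update example-cluster \
--zone us-central1-a \
--node-pool default-pool \
--enable-autoscaling --min-nodes 0 --max-nodes 3

With a minimum of zero, the autoscaler can remove every node in that pool once it has been idle long enough.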

But there are limitations.

Occasionally, cluster autoscaler cannot scale down completely and an extra node exists after scaling down. This can occur when required system Pods are scheduled onto different nodes, because there is no trigger for any of those Pods to be moved to a different node. See "I have a couple of nodes with low utilization, but they are not scaled down. Why?" in the Cluster Autoscaler FAQ. To work around this limitation, you can configure a Pod disruption budget.
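A hedged sketch of that workaround (the kube-dns selector and budget below are examples; adjust them to whichever system Pods are blocking scale-down in your cluster):

kubectl create poddisruptionbudget kube-dns-pdb \
--namespace kube-system \
--selector k8s-app=kube-dns \
--max-unavailable 1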

Here are the reasons why the nodes might not be scaled down:

  • the node group already has the minimum size,

  • node has the scale-down disabled annotation (see How can I prevent Cluster Autoscaler from scaling down a particular node?; a sketch of this annotation follows the list),

  • node was unneeded for less than 10 minutes (configurable by --scale-down-unneeded-time flag),

  • there was a scale-up in the last 10 min (configurable by --scale-down-delay-after-add flag),

  • there was a failed scale-down for this group in the last 3 minutes (configurable by --scale-down-delay-after-failure flag),

  • there was a failed attempt to remove this particular node, in which case Cluster Autoscaler will wait for an extra 5 minutes before considering it for removal again,

  • using large custom value for --scale-down-delay-after-delete or --scan-interval, which delays CA action.
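A sketch of the scale-down disabled annotation mentioned in the list (the node name is a placeholder):

kubectl annotate nodes gke-example-pool-1234-node-0 \
cluster-autoscaler.kubernetes.io/scale-down-disabled=true

Removing the annotation again (append a trailing - to the key) makes the node a candidate for scale-down once more.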

-- Crou
Source: StackOverflow

9/30/2019

The following command would resize the cluster to zero nodes:

gcloud container clusters resize [cluster-name] --size 0 --zone [zone]

Now it is up to you how you want to increase or decrease the size of the cluster.

Suppose you have a few things to deploy and you know how many resources they will need; increase the size of the cluster with the following command:

gcloud container clusters resize [cluster-name] --size 3 --zone [zone]

Once you are done with the task you wanted to perform, run the first command again to resize the cluster back to zero. You can write a shell script to automate this, provided you are certain about how long the cluster needs to perform your tasks.
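A minimal sketch of such a script, assuming the work is a Kubernetes Job named my-task and using placeholder cluster and zone names:

#!/usr/bin/env bash
set -euo pipefail

CLUSTER=my-cluster   # placeholder
ZONE=us-central1-a   # placeholder

# Scale up, run the workload, then scale back down to zero.
gcloud container clusters resize "$CLUSTER" --size 3 --zone "$ZONE" --quiet
gcloud container clusters get-credentials "$CLUSTER" --zone "$ZONE"
kubectl apply -f my-task-job.yaml
kubectl wait --for=condition=complete job/my-task --timeout=30m
gcloud container clusters resize "$CLUSTER" --size 0 --zone "$ZONE" --quiet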

-- Amit Yadav
Source: StackOverflow

9/28/2019

A GKE cluster can never scale down to 0 nodes because of the system Pods running in the cluster. The Pods running in the kube-system namespace count against resource usage in your nodes, so the autoscaler will never decide to scale the entire cluster down to 0.

It is definitely possible to have individual node pools scale down to 0, though. You may want to consider using two different node pools: a small one to hold all the system Pods (minus DaemonSet Pods) and a larger pool with autoscaling enabled from 0 to X. You can add a taint to the larger pool to ensure system Pods don't use it.

This will minimize your resource usage during down times, but there is no way to ensure k8s automatically resizes the whole cluster to 0.
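A hedged sketch of that layout, with placeholder names and sizes (your workloads would also need a toleration for the taint so they can land on the larger pool):

gcloud container node-pools create workers \
--cluster my-cluster --zone us-central1-a \
--num-nodes 1 \
--enable-autoscaling --min-nodes 0 --max-nodes 3 \
--node-taints dedicated=workers:NoSchedule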

Alternatively, if you have a planned schedule for when the cluster should scale up or down, you can leverage Cloud Scheduler to launch a job that sends an API call to the Container API to resize your cluster.
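One possible shape for that, sketched here with placeholder project, zone, cluster, node pool, schedule and service account, is an HTTP Cloud Scheduler job that calls the node pool setSize endpoint of the Container API:

gcloud scheduler jobs create http scale-down-cluster \
--schedule "0 20 * * *" \
--http-method POST \
--uri "https://container.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1-a/clusters/my-cluster/nodePools/default-pool:setSize" \
--message-body '{"nodeCount": 0}' \
--oauth-service-account-email scheduler@PROJECT_ID.iam.gserviceaccount.com

A second job with a different schedule and a body of {"nodeCount": 3} would scale it back up.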

Or you could configure a job in the cluster, or a preStop hook in your final job, to trigger a Cloud Function.

-- Patrick W
Source: StackOverflow