GKE - Upgrading cluster master after cluster creation completes

10/10/2019

Once we increase the load using the JMeter client, my deployed service is interrupted, and the GCP/GKE console says -

Upgrading cluster master
The values shown below are going to change soon.

And my kubectl client throws this error during the upgrade -

Unable to connect to the server: dial tcp 35.236.238.66:443: connectex: No connection could be made because the target machine actively refused it.

How can I stop this upgrade or prevent my service from being interrupted? If the service is interrupted, there is no benefit to this autoscaling. I am new to GKE, so please let me know if I am missing any configuration or parameter here. I am using this command to create my cluster -

gcloud container clusters create ajeet-gke --zone us-east4-b --node-locations us-east4-b --machine-type n1-standard-8 --num-nodes 1 --enable-autoscaling --min-nodes 4 --max-nodes 16

It is not upgrading the Kubernetes version, because everything works fine with a smaller load; only when I increase the load does the cluster start the master upgrade. So it looks like the master is resizing itself to handle more nodes. After the upgrade I can see more nodes on the GCP console. https://github.com/terraform-providers/terraform-provider-google/issues/3385
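A quick way to check what the cluster is actually doing during that window is to list the running operations and the current node count (a rough sketch, using the cluster name and zone from the create command below):

# list recent cluster operations; the TYPE column shows whether it is a
# master upgrade/resize or a node-pool change
gcloud container operations list --zone us-east4-b

# current node count and master version for the cluster
gcloud container clusters describe ajeet-gke --zone us-east4-b --format="value(currentNodeCount,currentMasterVersion)"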

The command below says autoscaling is not enabled on the instance group.

> gcloud compute instance-groups managed list
NAME                                   AUTOSCALED  LOCATION    SCOPE  ---
ajeet-gke-cluster-default-pool-4***0   no          us-east4-b  zone   ---

Workaround

Sorry, I forgot to update this here. I found a workaround: after splitting the cluster creation command into two steps, the cluster autoscales without restarting the master node:

gcloud container clusters create ajeet-ggs --zone us-east4-b --node-locations us-east4-b --machine-type n1-standard-8 --num-nodes 1
gcloud container clusters update ajeet-ggs --enable-autoscaling --min-nodes 1 --max-nodes 10 --zone us-east4-b --node-pool default-pool
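To double-check that the second command actually enabled autoscaling on the pool, describing the node pool should show the autoscaling block (a sketch with the same cluster name and zone):

# should print something like: enabled: true, minNodeCount: 1, maxNodeCount: 10
gcloud container node-pools describe default-pool --cluster ajeet-ggs --zone us-east4-b --format="yaml(autoscaling)"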
-- Ajeet
google-cloud-platform
google-kubernetes-engine
kubernetes

2 Answers

10/10/2019

The master won't resize the node pool unless the autoscaling feature is enabled on it.

As mentioned in the above answer, this is a feature at the node-pool level. Looking at the description of the issue, it does seem like autoscaling is enabled on your node pool, so GKE's cluster autoscaler automatically resizes the cluster based on the demands of the workloads you want to run (i.e., when there are pods that cannot be scheduled due to resource shortages such as CPU).

Additionally, Kubernetes cluster autoscaling does not use the Managed Instance Group (MIG) autoscaler. It runs a cluster-autoscaler controller on the Kubernetes master that uses Kubernetes-specific signals to scale your nodes.
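If you want to see those Kubernetes-specific signals yourself, a rough check (assuming kubectl is already pointed at this cluster) is to look for unschedulable pods and the scale-up events the autoscaler records on them:

# pods stuck in Pending are what the cluster autoscaler reacts to
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# recent events; FailedScheduling and TriggeredScaleUp show the trigger and the response
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp | grep -Ei 'FailedScheduling|TriggeredScaleUp'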

It is therefore highly recommended not to use Compute Engine's autoscaling feature (or rely on the autoscaling status shown by the MIG) on instance groups created by Kubernetes Engine.
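In practice that means the GKE node pool, not the MIG, is the place to check. A minimal sketch, assuming the cluster name and zone from the question:

# GKE-level view: this is where autoscaling is actually configured
gcloud container clusters describe ajeet-gke --zone us-east4-b --format="yaml(nodePools)"

# MIG-level view: keeps reporting AUTOSCALED=no even while the GKE
# cluster autoscaler is resizing the group
gcloud compute instance-groups managed list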

-- Digil
Source: StackOverflow

10/10/2019

To prevent this, you should always create your cluster with the cluster version pinned to the latest available version.
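For example (a sketch; 'my-cluster' and the version string are just placeholders, pick a real version from the get-server-config output):

# list the master and node versions currently offered in the zone
gcloud container get-server-config --zone us-east4-b

# pin the cluster to one of those versions instead of the default alias
gcloud container clusters create my-cluster --zone us-east4-b --num-nodes 1 --cluster-version 1.14.7-gke.10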

See the documentation: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#master

This means that Google manages the master: if your master is not up to date, it will be upgraded to the latest version, which lets Google limit the number of versions it currently has to manage. https://cloud.google.com/kubernetes-engine/docs/concepts/regional-clusters

Now, why do you have an interruption of service during the update? Because you are in zonal mode with only one master. To prevent this, you should use regional cluster mode with more than one master, which allows for a clean rolling update.
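A sketch of the regional form (the cluster name is a placeholder; note that --num-nodes is per zone, so this creates one node in each of the region's zones):

# regional cluster: the control plane is replicated across zones, so a
# master upgrade can roll through without dropping the API endpoint
gcloud container clusters create my-regional-cluster --region us-east4 --num-nodes 1 --enable-autoscaling --min-nodes 1 --max-nodes 10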

-- night-gold
Source: StackOverflow