GCP Kubernetes scales up too high

5/14/2019

I have a Kubernetes cluster hosted on GCP (Master version: 1.12.7-gke.7, Node version: 1.12.7-gke.7).

Recently I noticed that too many nodes are being created without any load on the system. My expected average is around 30 nodes, but after an unwanted scale-up the count climbs to around 60.

I tried to investigate this issue with

kubectl get hpa

and saw that the average CPU is near 0% - no scaling should occur here.
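To dig deeper than the summary table, it may help to inspect each HPA's recent scaling events and to check all namespaces, in case an HPA outside the default namespace is involved (a general sketch, not specific to this cluster):

```shell
# Show per-HPA detail, including current metrics and recent scaling events.
kubectl describe hpa

# List HPAs across all namespaces, in case one outside the
# default namespace is driving pod counts up.
kubectl get hpa --all-namespaces
```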

Also checked

kubectl get deployments 

and saw that the DESIRED number of pods is equal to AVAILABLE - so the system didn't ask for more resources.

After inspecting the node utilization I saw that around 25 nodes were using only about 200 mCPU each, which is very low consumption (roughly 5% of a node's capacity).
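One thing worth keeping in mind: the cluster autoscaler makes decisions based on pod resource *requests*, not observed usage, so low measured CPU alone doesn't tell the whole story. The commands below sketch how to compare actual usage with requests on an underutilized node (`<node-name>` is a placeholder):

```shell
# Per-node CPU/memory usage; requires the metrics pipeline
# (metrics-server, or Heapster on older 1.12 clusters).
kubectl top nodes

# Show which pods are scheduled on a given node and the total
# CPU/memory *requests* they reserve - this, not actual usage,
# is what the autoscaler considers for scale-down.
kubectl describe node <node-name>
```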

After a while, the cluster returns to normal (around 30 nodes) without any significant event.

What's going on here? What should I check next?

-- No1Lives4Ever
google-cloud-platform
google-kubernetes-engine
kubernetes

1 Answer

5/14/2019

The Horizontal Pod Autoscaler automatically scales the number of pods, so on its own it can't be responsible for scaling the nodes. However, if you have enabled the cluster autoscaler, this could be the cause. To debug what is going on you would need logs from your master node, which you have no access to in GKE because it is managed by Google.
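Before opening a support case, you can at least confirm whether the cluster autoscaler is enabled and look at its recorded decisions. A rough sketch (`<pool-name>`, `<cluster-name>`, and `<zone>` are placeholders for your setup):

```shell
# Check whether autoscaling is enabled on the node pool and what
# its min/max node counts are.
gcloud container node-pools describe <pool-name> \
    --cluster <cluster-name> --zone <zone> \
    --format="value(autoscaling)"

# GKE exposes the autoscaler's status in a configmap in kube-system,
# including recent scale-up/scale-down activity per node group.
kubectl describe configmap cluster-autoscaler-status -n kube-system

# Autoscaler decisions also show up as events; filter for them.
kubectl get events -n kube-system | grep -i cluster-autoscaler
```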

In this case my advice is to contact Google Cloud Support.

-- aurelius
Source: StackOverflow