When I resize a cluster from 1000 nodes to 0, GKE nodes remain for about 30 minutes in the kubectl get nodes output

3/27/2017

Please let me ask a question. I am running a Locust cluster built on a 1000-node GKE cluster (6,400 replicas in the ReplicaSets). The Locust cluster works fine, but there are problems with stopping this environment. When I tried to stop it with the following command, I ran into the problem that nodes in a Ready state remained in kubectl get nodes for more than 30 minutes. This is a problem for me because it means I can't quickly restart the cluster.

gcloud compute instance-groups managed resize gke-locust-xxxx --zone asia-east1-a --size 0

Because this is GKE, I can't see what is happening on the master node. Are there any likely causes? Or does Kubernetes simply take this long by design?
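For reference, one way to quantify how long the Ready nodes linger (a minimal sketch; the -w flag keeps NotReady nodes out of the count) is to poll the node list while the resize runs:

watch -n 60 'kubectl get nodes --no-headers | grep -cw Ready'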

-- umiyosh
google-kubernetes-engine
kubernetes

1 Answer

3/27/2017

... I ran into the problem that nodes in a Ready state remained in kubectl get nodes for more than 30 minutes.

Did you watch the number of VMs in your managed instance group over that 30-minute period? How long did it take for all of them to disappear? If you delete a VM from a GKE cluster, it should be removed from kubectl get nodes within a couple of minutes, so the slowness is most likely in deleting the VMs themselves.
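One way to watch that from the instance-group side (a sketch, assuming the same instance group name and zone as the resize command in the question) is to poll the group's describe output, whose currentActions counters should show how many instances are still deleting:

watch -n 60 'gcloud compute instance-groups managed describe gke-locust-xxxx --zone asia-east1-a'

If the deleting counter stays high for most of the 30 minutes, the delay is on the Compute Engine side rather than in Kubernetes removing the node objects.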

-- Robert Bailey
Source: StackOverflow