How to stop a Kubernetes cluster?

10/19/2017

I started the cluster on Google Compute Engine with the help of kube-up.sh. This script created the master node and a minion group. Once I don't need the cluster anymore, I want to stop it and shut down all the VMs so I don't waste money on running instances. When I shut it down (I just shut down all my cluster VMs, because I don't know another way to do it) and then start it again some time later, my cluster no longer works. "kubectl get nodes" displays incorrect information about the nodes (for example, I have minion nodes A, B, and C, but it displays only D, which doesn't even exist), and all commands run very, very slowly. Maybe I shut it down incorrectly. How do I properly stop the cluster and its VMs so that I can start them again later (not delete them)?

What cluster I have:

kubernetes-master             | us-central1-b
kubernetes-minion-group-nq7f  | us-central1-b
kubernetes-minion-group-gh5k  | us-central1-b

What the "kubectl get nodes" command displays:

[root@common frest0512]# kubectl get nodes
NAME                           STATUS                     AGE       VERSION
kubernetes-master              Ready,SchedulingDisabled   7h        v1.8.0
kubernetes-minion-group-02s7   Ready                      7h        v1.8.0
kubernetes-minion-group-92rn   Ready                      7h        v1.8.0
kubernetes-minion-group-kn2c   Ready                      7h        v1.8.0

Before shutting down the master node, the output was correct (the names and count of minions matched).

-- Viacheslav
google-cloud-platform
google-kubernetes-engine
kubernetes

6 Answers

12/13/2018

Thanks to Carlos for the tip.

You can follow the steps below to detach all active nodes from the Kubernetes cluster.

1- Go to the Kubernetes Engine dashboard and select the cluster.
https://console.cloud.google.com/kubernetes


2- Go to the details page, click Edit, and set the pool size to zero (0); a gcloud equivalent is sketched after these steps.


3- Validate that the nodes are shutting down in the Compute Engine dashboard.
https://console.cloud.google.com/compute

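For those who prefer the gcloud CLI, here is a minimal sketch of the same resize. The cluster name my-cluster is a placeholder, and default-pool is an assumption based on GKE's default node pool name:

    # Scale the node pool to zero; the control plane and cluster config survive.
    gcloud container clusters resize my-cluster --node-pool default-pool --num-nodes 0 --zone us-central1-b

    # Later, scale back up to get the nodes back.
    gcloud container clusters resize my-cluster --node-pool default-pool --num-nodes 3 --zone us-central1-b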

-- Teoman shipahi
Source: StackOverflow

10/27/2017

Unless I'm misunderstanding, it sounds like you're trying to kill the cluster by stopping the VMs individually through Compute Engine. If so, Container Engine is going to see that the VMs have been killed, and start new ones in their place to keep the cluster healthy. If you want to stop the cluster, you have to do so through the Container Engine interface. In this way, it will know that killing the nodes wasn't a mistake, and it will shut down all provisioned VMs and disks.
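
For a kube-up.sh cluster like the one in the question, the component recreating the VMs is the managed instance group, so the analogous move is to resize that group rather than stopping its members. A sketch, assuming the group is named kubernetes-minion-group (which the instance names above suggest):

    # See which managed instance group keeps recreating the minions.
    gcloud compute instance-groups managed list

    # Shrink the group to zero so the VMs stay down; resize it back up later.
    gcloud compute instance-groups managed resize kubernetes-minion-group --size 0 --zone us-central1-b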

-- sanche
Source: StackOverflow

4/16/2019

I've found a workaround for this problem, as I was facing the same issue with Google Cloud charges while working in the development phase. Kubernetes restarts the Google Cloud Compute VMs as soon as you issue the command to stop those instances, because Kubernetes is designed to do that. For example:

gcloud compute instances stop [instance-name] --async --zone=us-central1-b

will stop the instance once, and then Kubernetes will automatically trigger a restart when it realizes the instance has been brought down. This command works perfectly on any other instance, but not on the ones created as part of a Kubernetes cluster.

Because Kubernetes is managing the compute instances, it tends to restart any instances that are brought down. In order to stop the instances and reduce the operational cost, you have to drain the Kubernetes nodes first, using the commands below.

To stop the compute VM instances:

  • Fetch the list of running nodes using kubectl get nodes. This lists the nodes in the current context of the Kubernetes cluster.
  • You might need to take down all the nodes in drain mode in order to stop the Kubernetes services. Use the following command: kubectl drain [node-name].
  • Then use the gcloud command to shut down the compute instances: gcloud compute instances stop [instance-name] --async --zone=[zone]. This stops the VM instances, and Kubernetes won't trigger an auto-restart, since its nodes are already drained.

To restart the compute VM instances:

  • Use the following command to restart the compute VM instances: gcloud compute instances start [instance-name] --async --zone=[zone]. This command alone is not enough to start using your Kubernetes cluster, though: you also need to uncordon the nodes that were drained in the previous step.
  • Since the nodes are in drain mode, use this command to bring them back up: kubectl uncordon [node-name].
  • Now you'll be able to access the APIs exposed by the Kubernetes cluster nodes.

Note: The same result can be achieved through the Google Cloud console, but as a DevOps practitioner I prefer scripting workflows, and a little script to do this will help you in the long run.
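
As an illustration, here is a minimal sketch of such a pair of scripts. It assumes the Kubernetes node names match the Compute Engine instance names (as they do for kube-up.sh clusters), that everything lives in one zone, and that a nodes.txt file is an acceptable place to remember the node list; all of these are assumptions, not part of the original answer.

    #!/bin/bash
    # stop-cluster.sh -- sketch only; drain every node, then stop its VM.
    ZONE=us-central1-b   # placeholder zone

    # Save the node names so the start script can find them later,
    # even while the cluster API is down.
    kubectl get nodes -o name | cut -d/ -f2 > nodes.txt

    # Drain everything first, while the API server is still reachable.
    while read -r node; do
      kubectl drain "$node" --ignore-daemonsets
    done < nodes.txt

    # Then stop the underlying VMs.
    while read -r node; do
      gcloud compute instances stop "$node" --async --zone="$ZONE"
    done < nodes.txt

And the reverse:

    #!/bin/bash
    # start-cluster.sh -- boot the VMs, then make the nodes schedulable again.
    ZONE=us-central1-b   # placeholder zone

    while read -r node; do
      gcloud compute instances start "$node" --async --zone="$ZONE"
    done < nodes.txt

    # Once the API server is back, clear the drain from each node.
    while read -r node; do
      kubectl uncordon "$node"
    done < nodes.txt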

-- Gaurav Sharma
Source: StackOverflow

3/30/2019

If you are using kubeadm, there is no direct way to do a graceful shutdown. The only way is to reset:

kubeadm reset
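
Note that kubeadm reset acts only on the machine it runs on, so tearing down a multi-node cluster means running it on every node. A rough sketch, with node-name as a placeholder:

    # From a machine with kubectl access: remove the node from the cluster.
    kubectl drain node-name --ignore-daemonsets
    kubectl delete node node-name

    # On the node itself: wipe the local kubeadm state.
    sudo kubeadm reset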

-- amit23comp
Source: StackOverflow

2/27/2018

Well, what simply works for me is to first edit the cluster configuration: one of the fields is the number of replicas/pods. Reduce that number from 3 to zero. This way the configuration is still saved, but the resources won't be active, so you will not be charged.
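
If "replicas" here means workload replicas rather than the node pool size, the kubectl equivalent would be something like the sketch below; my-deployment is a placeholder, and scaling a workload to zero frees its resources without deleting its configuration:

    # Scale a workload down to zero; its spec stays in the cluster.
    kubectl scale deployment my-deployment --replicas=0

    # Scale back up when you need it again.
    kubectl scale deployment my-deployment --replicas=3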

-- Ose Mike
Source: StackOverflow

2/27/2018

Use kubectl delete deployment deployment_name

Then kubectl delete pods --all

After that, kubectl drain node_name will prepare the node for maintenance, so you can reboot it or delete it using kubectl delete node node_name

And if you have deleted everything, use kubeadm reset to clear all data and remove the cluster; or, if you are using GKE, just delete the cluster.
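
Put together as one sequence (deployment_name and node_name are placeholders from the answer, and kubeadm reset applies only to self-managed clusters):

    kubectl delete deployment deployment_name   # remove the workload
    kubectl delete pods --all                   # clear any remaining pods
    kubectl drain node_name --ignore-daemonsets # prepare the node for maintenance
    kubectl delete node node_name               # drop the node from the cluster
    kubeadm reset                               # on each node: wipe local state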

-- trigun117
Source: StackOverflow