I am really new to Kubernetes.
I have deployed Kubernetes using kops. My question is: how can I shut down my instances (not terminate them) so that my data, deployments and services are not lost?
Currently, after editing the instance groups (ig) of the master and nodes and changing the max and min instance size to 0 in the EC2 Auto Scaling group, my instances end up terminated, which also makes me lose my pods and the data inside them.
How can I overcome this issue?
You have actually answered it yourself: all that is required is to scale the instance group sizes to 0. Following this tutorial, the steps are:
- kops edit ig nodes - change minSize and maxSize to 0
- kops get ig - to get the master node name
- kops edit ig <master node name> - change minSize and maxSize to 0
- kops update cluster --yes
- kops rolling-update cluster
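For concreteness, here is a minimal sketch of the same sequence as shell commands. The cluster name k8s.example.com, the state-store bucket example-kops-state and the master group name master-eu-central-1a are placeholders rather than values from the question; kops get ig prints the real group names.

# Point kops at the cluster and its S3 state store (placeholder values)
export KOPS_STATE_STORE=s3://example-kops-state
export NAME=k8s.example.com

# List the instance groups to find the exact master group name
kops get ig --name $NAME

# Each edit opens an editor; set both minSize and maxSize to 0
kops edit ig nodes --name $NAME
kops edit ig master-eu-central-1a --name $NAME

# Apply the change to the AWS Auto Scaling groups and roll it out
# (--yes applies the changes; without it both commands only preview them)
kops update cluster --name $NAME --yes
kops rolling-update cluster --name $NAME --yes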
After that you can see in EC2 that all of the cluster machines are terminated. When you want to start the cluster again, just repeat the steps but change the values to the desired number of machines (at least 1 master).
I can confirm that all the pods, services and deployments were running again after scaling the cluster back to its initial size. In my case those were nginx pods and the hello-minikube pod from the Kubernetes documentation example. Did you miss any of these steps, which is why it did not work in your case? Do you have an S3 bucket that stores the cluster state? You need to run these commands before creating the kops cluster:
aws s3api create-bucket --bucket ... --region eu-central-1
aws s3api put-bucket-versioning --bucket ... --versioning-configuration Status=Enabled
kops lets you manage your clusters even after installation. To do this, it must keep track of the clusters that you have created, along with their configuration, the keys they are using etc. This information is stored in an S3 bucket. S3 permissions are used to control access to the bucket.
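As a rough sketch, the state-store commands above with the gaps filled in might look like the following; the bucket name example-kops-state and the eu-central-1 region are assumptions, not values from the question:

# Create a versioned S3 bucket for the kops state store (placeholder name and region)
aws s3api create-bucket --bucket example-kops-state --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1
aws s3api put-bucket-versioning --bucket example-kops-state --versioning-configuration Status=Enabled

# kops reads the cluster state from this bucket via the KOPS_STATE_STORE variable shown earlier
export KOPS_STATE_STORE=s3://example-kops-state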
(Screenshot: EC2 instances after scaling the cluster down to 0.)