Kubernetes cluster still running even after deleting it

1/22/2019

I created a Kubernetes cluster using the ansible-playbook command below:

ansible-playbook kubectl.yaml --extra-vars "kubernetes_api_endpoint=<Path to aws load balancer server>"

I then deleted the cluster using this command:

kubectl config delete-cluster <Name of cluster>

But the EC2 nodes are still running. I tried to stop them manually, but they start again automatically (expected, since they are part of a cluster).

Is there any way to detach the nodes from the cluster or delete the cluster entirely?

kubectl config view shows the following:

apiVersion: v1
clusters: []
contexts:
- context:
    cluster: ""
    user: ""
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: cc3.k8s.local
  user:
    token: cc3.k8s.local

This suggests there is no cluster configured locally. I want to delete the cluster entirely and start fresh.

-- jile singh sorout
amazon-ec2
amazon-web-services
ansible
kubernetes

3 Answers

1/22/2019

As @Jason mentioned, delete-cluster is not the right command if you want to delete the cluster completely.

It would help if you posted the content of the Ansible playbook that creates the cluster, so we can see how it provisions resources on AWS.

The easiest option, in my opinion, is to write a simple playbook that deletes the cluster by setting the relevant modules' state to absent (a minimal sketch follows below).
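Since the original playbook isn't shown, here is only a rough sketch of what such a delete playbook could look like, assuming the worker nodes were created with the amazon.aws.ec2_instance module and tagged with the cluster name (the region and tag key below are assumptions):

# delete-cluster.yaml -- a sketch, not a drop-in solution
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Terminate EC2 instances tagged with the cluster name
      amazon.aws.ec2_instance:
        region: us-east-1                          # assumed region
        state: absent                              # terminates the matching instances
        filters:
          "tag:KubernetesCluster": cc3.k8s.local   # assumed tag key; use whatever the create playbook set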

Or, if it uses EKS, you can configure your AWS command line and simply run, for example, aws eks delete-cluster --name devel. For more info, see the aws eks delete-cluster documentation.
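For example, assuming the AWS CLI is configured and the cluster is named devel as above:

aws eks list-clusters                   # confirm the cluster name
aws eks delete-cluster --name devel     # request deletion
aws eks describe-cluster --name devel   # shows status DELETING, then an error once the cluster is gone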

If it uses kops, you can run kops delete cluster --name <name> --yes. For more info, see the kops delete cluster documentation.
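A typical sequence, assuming the kops state store lives in an S3 bucket (the bucket name below is an assumption, and cc3.k8s.local is taken from your kubeconfig):

kops delete cluster --name cc3.k8s.local --state s3://my-kops-state-store        # dry run: lists what would be deleted
kops delete cluster --name cc3.k8s.local --state s3://my-kops-state-store --yes  # actually deletes the resources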

If you still need help, please add the Ansible playbook to the question by editing it.

-- coolinuxoid
Source: StackOverflow

1/22/2019

The delete-cluster command does this:

delete-cluster Delete the specified cluster from the kubeconfig

It only removes the cluster entry from your ~/.kube/config file; it does not delete the actual cluster.
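As a rough illustration of the local cleanup these commands perform (entry names are taken from the kubeconfig in the question; adjust them to what kubectl config view actually shows):

kubectl config get-clusters                       # list cluster entries in ~/.kube/config
kubectl config delete-cluster cc3.k8s.local       # removes only the local cluster entry
kubectl config delete-context default-context     # removes the context entry
kubectl config unset users.cc3.k8s.local          # removes the user entry

None of this touches the EC2 instances themselves.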

You will need to write a different script for that or go into the AWS console and simply delete the nodes.

--
Source: StackOverflow

2/5/2019

I just ran into this same problem. You need to delete the autoscaling group that spawns the worker nodes, which for some reason isn't deleted when you delete the EKS cluster.

Open the AWS console (console.aws.amazon.com), navigate to the EC2 dashboard, then scroll down the left pane to "Auto Scaling Groups". Deleting the autoscaling group should stop the worker nodes from endlessly spawning. You may also want to click on "Launch Configurations" and delete the template as well.
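The same cleanup can be done from the AWS CLI; a minimal sketch, where the group and launch configuration names are placeholders you would look up with the describe commands first:

aws autoscaling describe-auto-scaling-groups      # find the worker node group's name
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name <asg-name> --force-delete
aws autoscaling describe-launch-configurations    # find the matching launch configuration
aws autoscaling delete-launch-configuration --launch-configuration-name <lc-name>

The --force-delete flag terminates the remaining instances along with the group.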

HTH!

-- prr
Source: StackOverflow