Can't delete Kubernetes cluster deployed with kops on AWS

1/8/2019

I can't delete/update a cluster. I'm getting:

    I0107 19:54:02.618454 8397 request_logger.go:45] AWS request: autoscaling/DescribeAutoScalingGroups
    I0107 19:54:02.812764 8397 request_logger.go:45] AWS request: ec2/DescribeNatGateways
    W0107 19:54:03.032646 8397 executor.go:130] error running task "ElasticIP/us-east-1a.my.domain" (9m56s remaining to succeed): error finding AssociatedNatGatewayRouteTable: error listing NatGateway %!q(*string=0xc42169eb08): NatGatewayNotFound: NAT gateway nat-083300682d9a0fa74 was not found
        status code: 400, request id: 8408a79d-1f8f-4886-83d9-ae0a26c1cc47
    I0107 19:54:03.032738 8397 executor.go:103] Tasks: 98 done / 101 total; 1 can run
    I0107 19:54:03.032828 8397 executor.go:178] Executing task "ElasticIP/us-east-1a.my.domain": *awstasks.ElasticIP {"Name":"us-east-1a.my.domain","Lifecycle":"Sync","ID":null,"PublicIP":null,"TagOnSubnet":null,"Tags":{"KubernetesCluster":"my.domain","Name":"us-east-1a.my.domain","kubernetes.io/cluster/my.domain":"owned"},"AssociatedNatGatewayRouteTable":{"Name":"private-us-east-1a.my.domain","Lifecycle":"Sync","ID":"rtb-089bd4ffc062a3b15","VPC":{"Name":"my.domain","Lifecycle":"Sync","ID":"vpc-0b638e55c11fc9021","CIDR":"172.10.0.0/16","EnableDNSHostnames":null,"EnableDNSSupport":true,"Shared":true,"Tags":null},"Shared":false,"Tags":{"KubernetesCluster":"my.domain","Name":"private-us-east-1a.my.domain","kubernetes.io/cluster/my.domain":"owned","kubernetes.io/kops/role":"private-us-east-1a"}}}
    I0107 19:54:03.033039 8397 natgateway.go:205] trying to match NatGateway via RouteTable rtb-089bd4ffc062a3b15
    I0107 19:54:03.033304 8397 request_logger.go:45] AWS request: ec2/DescribeRouteTables
    I0107 19:54:03.741980 8397 request_logger.go:45] AWS request: ec2/DescribeNatGateways
    W0107 19:54:03.981744 8397 executor.go:130] error running task "ElasticIP/us-east-1a.my.domain" (9m55s remaining to succeed): error finding AssociatedNatGatewayRouteTable: error listing NatGateway %!q(*string=0xc4217e8da8): NatGatewayNotFound: NAT gateway nat-083300682d9a0fa74 was not found
        status code: 400, request id: 3be6843a-38e2-4584-b2cd-b29f6a132d2d
    I0107 19:54:03.981881 8397 executor.go:145] No progress made, sleeping before retrying 1 failed task(s)
    I0107 19:54:13.982261 8397 executor.go:103] Tasks: 98 done / 101 total; 1 can run
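
For reference, a quick way to confirm what the log is reporting, using the AWS CLI and the IDs that appear above (region us-east-1 taken from the task name; this is just a diagnostic sketch, not something kops requires):

    # Confirm the NAT gateway from the error really is gone
    aws ec2 describe-nat-gateways --nat-gateway-ids nat-083300682d9a0fa74 --region us-east-1
    # expected: a NatGatewayNotFound error, matching the kops output

    # Inspect the private route table kops is matching against
    aws ec2 describe-route-tables --route-table-ids rtb-089bd4ffc062a3b15 --region us-east-1 \
        --query 'RouteTables[].Routes'
    # a route whose NatGatewayId is nat-083300682d9a0fa74 with State "blackhole" is the stale reference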

I changed the kubectl version to do some tasks on other clusters and then went back to the latest. I had been testing new clusters (deleting, creating, updating) with no issues until now: I have this one cluster that I can't modify, and it keeps costing money. Sure, I could remove the kops IAM user, but I use it for other environments in the same account.

At the very least, is there a file where I can edit what kops expects to find in AWS, so that I can remove this object? I couldn't find it in the config/spec files in S3.
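
For context, a hedged sketch of how to look at what kops actually keeps in the state store; as far as I can tell it only holds the cluster and instance group specs (plus PKI and secrets), while the NAT gateway itself is matched at runtime via the route table, as the natgateway.go line in the log shows. <state-store-bucket> is a placeholder:

    # List everything kops stores for this cluster
    aws s3 ls --recursive s3://<state-store-bucket>/my.domain/

    # The cluster spec is easier to view/edit through kops than through raw S3
    kops get cluster my.domain --state s3://<state-store-bucket> -o yaml
    kops edit cluster my.domain --state s3://<state-store-bucket>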

So I have a deployed cluster that I can't use because of this. Sure, I could deny kops its permissions and delete the cluster so that kops can't recreate it, but I have other clusters in the same account as well.

kops version: Version 1.10.0 (git-8b52ea6d1)

-- Rancor
amazon-web-services
kops
kubernetes

3 Answers

1/10/2019

I ended up deleting the S3 bucket and then removing all of the AWS resources manually.

For future readers: enable versioning on the bucket where you export the cluster config (the kops state store).
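
A minimal sketch of turning that on with the AWS CLI; the bucket name is a placeholder:

    # Enable versioning on the kops state-store bucket
    aws s3api put-bucket-versioning \
        --bucket <state-store-bucket> \
        --versioning-configuration Status=Enabled

    # Verify it took effect
    aws s3api get-bucket-versioning --bucket <state-store-bucket>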

-- Rancor
Source: StackOverflow

2/15/2019

We ran into the same issue a few minutes ago. We were able to fix it by searching for VPC route table entries that pointed to the respective NAT gateway (their status was Blackhole). After deleting those routes, we were finally able to delete the cluster without any further issues.

We were pointed in the right direction by this issue comment.
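
A rough sketch of that cleanup with the AWS CLI, using the IDs from the question's log; the 0.0.0.0/0 destination is an assumption, so take the actual DestinationCidrBlock from the describe output:

    # Find route tables that still have routes pointing at the dead NAT gateway
    aws ec2 describe-route-tables --region us-east-1 \
        --filters Name=route.nat-gateway-id,Values=nat-083300682d9a0fa74 \
        --query 'RouteTables[].{Id:RouteTableId,Routes:Routes}'

    # Delete the blackhole route (use the CIDR reported above)
    aws ec2 delete-route --region us-east-1 \
        --route-table-id rtb-089bd4ffc062a3b15 \
        --destination-cidr-block 0.0.0.0/0

    # After that, the kops delete should go through
    kops delete cluster --name my.domain --state s3://<state-store-bucket> --yes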

-- John
Source: StackOverflow

9/20/2019

Deleting just the master node kills the cluster. I had a similar issue while I was testing kops, and it resulted in a small bill. When I deleted a worker node, a new one was created immediately, which is understandable. So I deleted the master node, and the cluster died.

-- Cem Yasar
Source: StackOverflow