I had a "stuck" namespace that I had deleted, and it kept showing in this eternal "Terminating" status.
Step 1: Dump the descriptor as JSON to a file
kubectl get namespace YOURNAMESPACE -o json > logging.json
Open the file for editing:
{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "creationTimestamp": "2019-05-14T13:55:20Z",
        "labels": {
            "name": "logging"
        },
        "name": "logging",
        "resourceVersion": "29571918",
        "selfLink": "/api/v1/namespaces/logging",
        "uid": "e9516a8b-764f-11e9-9621-0a9c41ba9af6"
    },
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },
    "status": {
        "phase": "Terminating"
    }
}
Remove kubernetes from the finalizers array:
{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "creationTimestamp": "2019-05-14T13:55:20Z",
        "labels": {
            "name": "logging"
        },
        "name": "logging",
        "resourceVersion": "29571918",
        "selfLink": "/api/v1/namespaces/logging",
        "uid": "e9516a8b-764f-11e9-9621-0a9c41ba9af6"
    },
    "spec": {
        "finalizers": []
    },
    "status": {
        "phase": "Terminating"
    }
}
Step 2: Execute the cleanup command
Now that we have that set up, we can instruct the cluster to get rid of that annoying namespace:
curl -k -H "Content-Type: application/json" -X PUT --data-binary @logging.json http://127.0.0.1:8001/api/v1/namespaces/YOURNAMESPACE/finalize
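Note that the curl call targets 127.0.0.1:8001, which is where kubectl proxy listens by default, so keep a proxy running in a separate terminal while you issue it:
kubectl proxy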
Enjoy
Run kubectl get apiservice
In the output you will find an apiservice whose Available flag is False.
Delete that apiservice using kubectl delete apiservice <apiservice name>
After doing this, the namespace stuck in Terminating status will disappear.
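If you want to condense the lookup and the delete into one (admittedly blunt) one-liner, something like this should work, assuming every APIService reported as unavailable is actually safe to delete in your cluster:
kubectl delete apiservice $(kubectl get apiservice | grep False | awk '{print $1}')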
There are a couple of things you can run. What this usually means is that the automatic deletion of the namespace was not able to finish, and something is left that has to be removed manually. To find it you can do these things:
Get all resources attached to the namespace. If this does not return anything, move on to the next suggestion:
$ kubectl get all -n your-namespace
Some namespaces have apiservices attached to them, which can make them troublesome to delete (this can, for that matter, be whatever resource you want). Delete that resource if the query finds anything:
$ kubectl get apiservice | grep False
The main takeaway is that there might be some things that are not completely removed. So look at what you initially had in that namespace and what your YAMLs spin up, and check whether those things are still around. Or you can start to google "why won't service X be properly removed", and you will find things.
Assuming you've already tried to force-delete resources like pods stuck in Terminating status, and you're at your wits' end trying to recover the namespace...
You can force-delete the namespace (perhaps leaving dangling resources):
(
NAMESPACE=your-rogue-namespace
kubectl proxy &
kubectl get namespace $NAMESPACE -o json |jq '.spec = {"finalizers":[]}' >temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
)
This is a refinement of the answer here, which is based on the comment here.
I'm using the jq utility to programmatically delete elements in the finalizers section. You could do that manually instead.
kubectl proxy creates the listener at 127.0.0.1:8001 by default. If you know the hostname/IP of your cluster master, you may be able to use that instead.
The funny thing is that this approach seems to work even though making the same change with kubectl edit has no effect.
This is caused by resources still existing in the namespace that the namespace controller is unable to remove.
This command (with kubectl 1.11+) will show you what resources remain in the namespace:
kubectl api-resources --verbs=list --namespaced -o name \
| xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>
Once you find and remove those remaining resources, the namespace will be cleaned up.
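If one of the leftover resources is itself stuck on its own finalizers, clearing them usually lets the namespace controller finish. A minimal sketch, using a hypothetical resource kind widgets and object name example (substitute whatever kubectl api-resources turned up for you):
kubectl patch widgets example -n <namespace> --type=merge -p '{"metadata":{"finalizers":null}}'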
For us it was the metrics-server crashing.
To check whether this is relevant to your case, run the following: kubectl api-resources
If you get
error: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
then it's probably the same issue.
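If that is indeed your case, one way to unblock the deletion, assuming the standard APIService name registered by metrics-server, is to remove the broken APIService (it gets re-created once metrics-server is redeployed and healthy):
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl delete apiservice v1beta1.metrics.k8s.io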
Credit goes to @javierprovecho here.
Simple trick
You can edit the namespace directly on the console: kubectl edit namespace <namespace name>
Remove "kubernetes" from inside the finalizers section, then save/apply the changes.
That way you can do it in one step.
Trick : 1
kubectl get namespace annoying-namespace-to-delete -o json > tmp.json
Then edit tmp.json and remove "kubernetes" from the finalizers array.
Open another terminal and run kubectl proxy.
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://localhost:8001/api/v1/namespaces/<NAMESPACE NAME TO DELETE>/finalize
and it should delete your namespace.
Trick : 2
Check the cluster info:
1. kubectl cluster-info
Kubernetes master is running at https://localhost:6443
KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use kubectl cluster-info dump:
2. kubectl cluster-info dump
Now start the proxy using the command:
3. kubectl proxy
kubectl proxy &
Starting to serve on 127.0.0.1:8001
Find the namespace:
4. kubectl get ns
{Your namespace name} Terminating 1d
Put it in a file:
5. kubectl get namespace {Your namespace name} -o json > tmp.json
Edit the file tmp.json and remove the finalizers:
}, "spec": { "finalizers": [ "kubernetes" ] },
After editing, it should look like this:
}, "spec": { "finalizers": [ ] },
We're almost there; now simply run the command:
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/{Your namespace name}/finalize
and it's gone
As mentioned before in this thread, there is another way to terminate a namespace using an API not exposed by kubectl, with a modern version of kubectl where kubectl replace --raw is available (not sure from which version). This way you will not have to spawn a kubectl proxy process, and you avoid the dependency on curl (which in some environments like busybox is not available). In the hope that this will help someone else, I leave this here:
kubectl get namespace "stucked-namespace" -o json \
| tr -d "\n" | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" \
| kubectl replace --raw /api/v1/namespaces/stucked-namespace/finalize -f -
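If jq is available, a slightly more robust variant of the same idea (avoiding the sed regex) could be:
kubectl get namespace "stucked-namespace" -o json \
| jq '.spec.finalizers = []' \
| kubectl replace --raw /api/v1/namespaces/stucked-namespace/finalize -f -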
Completing the already great answer by nobar: if you deployed your cluster with Rancher, there is a caveat.
Rancher deployments change EVERY API call, prepending /k8s/clusters/c-XXXXX/ to the URLs.
The id of the cluster on Rancher (c-XXXXX) is something you can easily get from the Rancher UI, as it will be there in the URL.
So after you get that cluster id c-XXXXX, just do as nobar says, only changing the API call to include that Rancher bit:
(
NAMESPACE=your-rogue-namespace
kubectl proxy &
kubectl get namespace $NAMESPACE -o json |jq '.spec = {"finalizers":[]}' >temp.json
curl -k -H "Content-Type: application/json" \
-X PUT --data-binary @temp.json \
127.0.0.1:8001/k8s/clusters/c-XXXXX/api/v1/namespaces/$NAMESPACE/finalize
)
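If you are unsure whether your kubeconfig actually goes through the Rancher API endpoint (and therefore needs that prefix), one quick check is to print the server URL kubectl is using; if it already contains /k8s/clusters/c-XXXXX, use the prefixed finalize URL above:
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'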
Run the following command to view the namespaces that are stuck in the Terminating state:
kubectl get namespaces
Select a terminating namespace and view its contents to find out the finalizer. Run the following command:
kubectl get namespace <terminating-namespace> -o yaml
Your YAML contents might resemble the following output:
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: 2019-12-25T17:38:32Z
  deletionTimestamp: 2019-12-25T17:51:34Z
  name: <terminating-namespace>
  resourceVersion: "4779875"
  selfLink: /api/v1/namespaces/<terminating-namespace>
  uid: ******-****-****-****-fa1dfgerz5
spec:
  finalizers:
  - kubernetes
status:
  phase: Terminating
Run the following command to create a temporary JSON file:
kubectl get namespace <terminating-namespace> -o json > tmp.json
Edit your tmp.json file: remove the kubernetes value from the finalizers field and save the file. The result should look like this:
{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "creationTimestamp": "2018-11-19T18:48:30Z",
        "deletionTimestamp": "2018-11-19T18:59:36Z",
        "name": "<terminating-namespace>",
        "resourceVersion": "1385077",
        "selfLink": "/api/v1/namespaces/<terminating-namespace>",
        "uid": "b50c9ea4-ec2b-11e8-a0be-fa163eeb47a5"
    },
    "spec": {
    },
    "status": {
        "phase": "Terminating"
    }
}
To set a temporary proxy IP and port, run the following command. Be sure to keep your terminal window open until you delete the stuck namespace:
kubectl proxy
Your proxy IP and port might resemble the following output:
Starting to serve on 127.0.0.1:8001
From a new terminal window, make an API call with your temporary proxy IP and port:
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/<terminating-namespace>/finalize
Your output should look like this:
{
    "kind": "Namespace",
    "apiVersion": "v1",
    "metadata": {
        "name": "<terminating-namespace>",
        "selfLink": "/api/v1/namespaces/<terminating-namespace>/finalize",
        "uid": "b50c9ea4-ec2b-11e8-a0be-fa163eeb47a5",
        "resourceVersion": "1602981",
        "creationTimestamp": "2018-11-19T18:48:30Z",
        "deletionTimestamp": "2018-11-19T18:59:36Z"
    },
    "spec": {
    },
    "status": {
        "phase": "Terminating"
    }
}
The finalizer parameter is removed. To verify that the terminating namespace is gone, run the following command:
kubectl get namespaces
The only way I found to remove a "Terminating" namespace is by deleting the entry inside the "finalizers" section. I've tried to --force delete it and to set --grace-period=0; none of them worked, however, this method did:
On the command line, display the info from the namespace:
$ kubectl get namespace your-rogue-namespace -o yaml
This will give you YAML output; look for lines similar to these:
deletionTimestamp: 2018-09-17T13:00:10Z
finalizers:
- Whatever content it might be here...
labels:
Then simply edit the namespace configuration and delete the items inside that finalizers container.
$ kubectl edit namespace your-rogue-namespace
This will open an editor (in my case vi). I went to the line I wanted to delete and removed it by pressing the d key twice (dd deletes the whole line in vi).
Save it, quit your editor, and like magic, the rogue namespace should be gone.
And to confirm it just:
$ kubectl get namespace your-rogue-namespace -o yaml
The simplest and easiest way of doing this is copying this bash script:
#!/bin/bash
###############################################################################
# Copyright (c) 2018 Red Hat Inc
#
# See the NOTICE file(s) distributed with this work for additional
# information regarding copyright ownership.
#
# This program and the accompanying materials are made available under the
# terms of the Eclipse Public License 2.0 which is available at
# http://www.eclipse.org/legal/epl-2.0
#
# SPDX-License-Identifier: EPL-2.0
###############################################################################
set -eo pipefail
die() { echo "$*" 1>&2 ; exit 1; }
need() {
which "$1" &>/dev/null || die "Binary '$1' is missing but required"
}
# checking pre-reqs
need "jq"
need "curl"
need "kubectl"
PROJECT="$1"
shift
test -n "$PROJECT" || die "Missing arguments: kill-ns <namespace>"
kubectl proxy &>/dev/null &
PROXY_PID=$!
killproxy () {
kill $PROXY_PID
}
trap killproxy EXIT
sleep 1 # give the proxy a second
kubectl get namespace "$PROJECT" -o json | jq 'del(.spec.finalizers[] | select(. == "kubernetes"))' | curl -s -k -H "Content-Type: application/json" -X PUT -o /dev/null --data-binary @- http://localhost:8001/api/v1/namespaces/$PROJECT/finalize && echo "Killed namespace: $PROJECT"
# proxy will get killed by the trap
Add the above code to a deletenamespace.sh file.
Then execute it by providing the namespace as a parameter (linkerd is the namespace I wanted to delete here):
➜ kubectl get namespaces
linkerd Terminating 11d
➜ sh deletenamespace.sh linkerd
Killed namespace: linkerd
➜ kubectl get namespaces
The above tip worked for me.
Honestly, I think kubectl delete namespace mynamespace --grace-period=0 --force is not at all worth trying.
Special thanks to Jens Reimann! I think this script should be incorporated into kubectl's commands.
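If you want it to feel like a built-in command today, you can already install the script as a kubectl plugin: any executable named kubectl-<something> on your PATH becomes a kubectl subcommand (the plugin mechanism is available in kubectl 1.12+). The file name kubectl-killns below is just a suggested example:
chmod +x kubectl-killns
sudo mv kubectl-killns /usr/local/bin/
kubectl killns linkerd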