We have one cluster where namespaces never seem to get deleted completely, and now I can't re-create the custom-metrics namespace, which I need in order to collect custom metrics and properly set up HPA. I fully understand that I could create another namespace with all the custom-metrics resources, but I'm a little concerned about the overall health of the cluster, given that namespaces get stuck in the "Terminating" state.
$ kubectl get ns
NAME             STATUS        AGE
cert-manager     Active        14d
custom-metrics   Terminating   7d
default          Active        222d
nfs-share        Active        15d
ingress-nginx    Active        103d
kube-public      Active        222d
kube-system      Active        222d
lb               Terminating   4d
monitoring       Terminating   6d
production       Active        221d
I already tried exporting the namespaces to JSON, deleting the finalizers, and re-creating them from the edited JSON files. I also tried kubectl edit ns custom-metrics and deleting the "- kubernetes" finalizer. All to no avail.
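For reference, a rough sketch of those attempts (the exact commands may have differed slightly):
kubectl get ns custom-metrics -o json > custom-metrics.json
# edit custom-metrics.json and remove "kubernetes" from spec.finalizers, then:
kubectl replace -f custom-metrics.json
# and, interactively:
kubectl edit ns custom-metrics   # delete the "- kubernetes" entry under spec.finalizers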
Does anyone have any other recommendations on how I can try to destroy these "stuck" namespaces?
curl to https://master-ip/api/v1/namespace/...../finalize doesn't seem to work on Google Kubernetes Engine for me; I'm assuming these operations are not allowed on a GKE cluster.
Things like the following don't work either:
$ kubectl delete ns custom-metrics --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (Conflict): Operation cannot be fulfilled on namespaces "custom-metrics": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system.
And there are no resources listed in this namespace at all: kubectl get all -n custom-metrics
Looping through all api-resources in this namespace also shows that no resources exist: kubectl api-resources --namespaced=true -o name | xargs -n 1 kubectl get -n custom-metrics
I was able to reproduce this by installing the Prometheus operator from this repo and then just trying to delete the namespace.
First, run:
k apply -f manifests/
That command creates the monitoring namespace, a bunch of namespaced resources like Deployments and ConfigMaps, as well as non-namespaced ones like ClusterRoles, CRDs, etc.
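A quick way to see that mix, assuming the manifests here are plain YAML files:
# count the resource kinds defined in the manifests (namespaced and cluster-scoped alike)
grep -h '^kind:' manifests/*.yaml | sort | uniq -c | sort -rn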
Then imperatively delete the namespace:
k delete ns monitoring
with the idea that Kubernetes will delete all the corresponding resources. As a result, all objects in the namespace were deleted, but the namespace itself got stuck in the Terminating state.
Just to illustrate, here is the list of stray resources left over after "deleting" the namespace. They got deleted only after running kubectl delete on the corresponding folder:
customresourcedefinition.apiextensions.k8s.io "podmonitors.monitoring.coreos.com" deleted
customresourcedefinition.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" deleted
customresourcedefinition.apiextensions.k8s.io "prometheusrules.monitoring.coreos.com" deleted
customresourcedefinition.apiextensions.k8s.io "servicemonitors.monitoring.coreos.com" deleted
clusterrole.rbac.authorization.k8s.io "prometheus-operator" deleted
clusterrolebinding.rbac.authorization.k8s.io "prometheus-operator" deleted
clusterrole.rbac.authorization.k8s.io "kube-state-metrics" deleted
clusterrolebinding.rbac.authorization.k8s.io "kube-state-metrics" deleted
clusterrole.rbac.authorization.k8s.io "node-exporter" deleted
clusterrolebinding.rbac.authorization.k8s.io "node-exporter" deleted
apiservice.apiregistration.k8s.io "v1beta1.metrics.k8s.io" deleted
clusterrole.rbac.authorization.k8s.io "prometheus-adapter" deleted
clusterrole.rbac.authorization.k8s.io "system:aggregated-metrics-reader" deleted
clusterrolebinding.rbac.authorization.k8s.io "prometheus-adapter" deleted
clusterrolebinding.rbac.authorization.k8s.io "resource-metrics:system:auth-delegator" deleted
clusterrole.rbac.authorization.k8s.io "resource-metrics-server-resources" deleted
rolebinding.rbac.authorization.k8s.io "resource-metrics-auth-reader" deleted
clusterrole.rbac.authorization.k8s.io "prometheus-k8s" deleted
clusterrolebinding.rbac.authorization.k8s.io "prometheus-k8s" deleted
rolebinding.rbac.authorization.k8s.io "prometheus-k8s" deleted
role.rbac.authorization.k8s.io "prometheus-k8s" deleted
This experiment likely supports the idea that if your namespace is stuck in the Terminating state, there are resources left over that still reference it and prevent it from being deleted. The easiest (and correct) way to clean up is to use the same instrumentation you used to create it (kubectl with the same manifests, Helm, etc.).
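Concretely, the cleanup that produced the output above was along these lines (assuming the same manifests/ folder used for the install):
# delete everything defined in the manifests, skipping objects that no longer exist
kubectl delete -f manifests/ --ignore-not-found=true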
Looks like this is a known issue, with people getting mixed results from trying a mix of different things:
kubectl delete ns <name> --grace-period=0 --force
There is some more background, though at the pod level, here too.
For me, deletion with --grace-period=0 --force has never worked. Rico's answer is good, but you can probably do it without restarting your cluster.
In my case, there are ALWAYS some objects that get recreated after you have "deleted" your namespace.
To see which Kubernetes resource types are and aren't namespaced:
kubectl api-resources --namespaced=true
kubectl api-resources --namespaced=false
What I do is go through those lists, find all the k8s objects that are still tied to that specific namespace, and delete them manually.
EDIT: Another useful command for finding objects that should be deleted:
kubectl api-resources --verbs=list --namespaced -o name \
| xargs -n 1 kubectl get --show-kind --ignore-not-found -l <label>=<value> -n <namespace>
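To cross-check the cluster-scoped side, here is a sketch that looks for objects still pointing at the namespace; it assumes jq is installed, and ClusterRoleBindings and APIServices are only example kinds:
# cluster role bindings whose subjects live in the namespace
kubectl get clusterrolebindings -o json \
  | jq -r '.items[] | select(any(.subjects[]?; .namespace == "<namespace>")) | .metadata.name'
# API services backed by a Service in the namespace (an unreachable one can block namespace deletion)
kubectl get apiservices -o json \
  | jq -r '.items[] | select(.spec.service.namespace == "<namespace>") | .metadata.name'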
The only solution that worked for me was:
kubectl get namespace annoying-namespace-to-delete -o json > tmp.json
edit tmp.json and remove "kubernetes" from "spec": { "finalizers": [] }
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json https://kubernetes-cluster-ip/api/v1/namespaces/annoying-namespace-to-delete/finalize
and this should delete your namespace.
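A variant of the same call that avoids dealing with the cluster's certificates, assuming kubectl proxy and jq are available (the proxy serves the API on 127.0.0.1:8001 by default):
kubectl proxy &
kubectl get namespace annoying-namespace-to-delete -o json \
  | jq '.spec.finalizers = []' \
  | curl -s -H "Content-Type: application/json" -X PUT --data-binary @- \
      http://127.0.0.1:8001/api/v1/namespaces/annoying-namespace-to-delete/finalize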