I can't delete this StatefulSet in Kubernetes, even with --cascade=false so that it doesn't delete the Pods it manages.
kubectl get statefulsets
NAME                        DESIRED   CURRENT   AGE
assets-elasticsearch-data   0         1         31m
Then:
kubectl delete statefulsets assets-elasticsearch-data
^C
... hangs for minutes until I give up, then:
kubectl delete statefulsets assets-elasticsearch-data --cascade=false
statefulset "assets-elasticsearch-data" deleted
kubectl get statefulsets
NAME                        DESIRED   CURRENT   AGE
assets-elasticsearch-data   0         1         32m
I'm using Google's GKE.
In my case I was using an old version of kubectl. I installed a recent one on CentOS via yum, and the problem is solved; I am able to delete the stalled StatefulSet.
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=0
repo_gpgcheck=0
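With that repo file in place, installing the newer client is a plain yum install; a minimal sketch (the package name kubectl comes from the standard Kubernetes yum repo, and the version check is just to confirm which client you now have):

```shell
# Install the kubectl client from the repo defined above
sudo yum install -y kubectl

# Confirm the client version before retrying the delete
kubectl version --client
```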
If the mentioned force flags don't work, I would suggest looking at the metadata block in the resource's YAML and deleting any existing finalizers, as well as setting blockOwnerDeletion to false in the ownerReferences, then retrying the delete.
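A minimal sketch of clearing the finalizers with a merge patch, using the StatefulSet name from the question (inspect the metadata first so you know what you are removing):

```shell
# Inspect metadata for finalizers and ownerReferences before touching anything
kubectl get statefulset assets-elasticsearch-data -o yaml

# Clear the finalizers so the API server can complete the deletion
kubectl patch statefulset assets-elasticsearch-data --type=merge \
  -p '{"metadata":{"finalizers":null}}'

# Then retry the delete
kubectl delete statefulset assets-elasticsearch-data
```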
Had a similar issue with k8s 1.8. Tried many times and it timed out. Eventually I tried:
kubectl delete statefulsets mariadb -n openstack --force
error: timed out waiting for "mariadb" to be synced
This appears to work:
kubectl delete statefulsets mariadb -n openstack --force --grace-period=0 --cascade=false
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
statefulset "mariadb" deleted
I could reproduce the bug twice with Kubernetes 1.7.3 and, after destroying the cluster for the 3rd time and downgrading to Kubernetes 1.6.7, I had no problem deleting StatefulSets or Helm deployments (the Elasticsearch Helm chart in my case).
Try the delete action again with --grace-period=0 and --force.
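Using the StatefulSet name from the question, that would look like the following (same caveat as the warning above: immediate deletion does not wait for confirmation that the resource has terminated):

```shell
kubectl delete statefulset assets-elasticsearch-data --grace-period=0 --force
```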