The StatefulSet es-data was failing in our test environment and I was asked to delete the corresponding PV.
So I deleted the following for es-data:
Is this correct if you want to delete the PV?
You can delete the PV using the following two commands:
kubectl delete pv <pv_name> --grace-period=0 --force
And then delete the finalizer using:
kubectl patch pv <pv_name> -p '{"metadata": {"finalizers": null}}'
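One caveat (an assumption based on kubectl's default behavior, not stated in the answer above): kubectl delete blocks until the object is actually gone, so with a finalizer still present the first command will hang. Either run the patch from a second terminal, or pass --wait=false so the delete returns immediately:

```shell
# Returns immediately instead of blocking on the finalizer
kubectl delete pv <pv_name> --grace-period=0 --force --wait=false

# Removing the finalizer then lets the pending deletion complete
kubectl patch pv <pv_name> -p '{"metadata":{"finalizers":null}}'
```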
It worked for me when I first deleted the PVC, then the PV:
kubectl delete pvc data-p-0
kubectl delete pv <pv-name> --grace-period=0 --force
This assumes you want to delete the PVC as well; otherwise the deletion seems to hang.
Most answers in this thread simply give the commands without explaining the root cause.
Here is a diagram to help you understand better. Refer to my other answer for the commands and additional info -> https://stackoverflow.com/a/73534207/6563567
This diagram shows how to cleanly delete a volume:

In your case, the PVC and PV are stuck in the Terminating state because of finalizers. Finalizers are guard rails in k8s to avoid accidental deletion of resources.
Your observations are correct and this is how Kubernetes works, but the order in which you deleted the resources was incorrect.
This is what happened:
The PV is stuck terminating because the PVC still exists. The PVC is stuck terminating because the StatefulSet's pods are still using the volumes (the volumes are attached to the nodes and mounted into the pods). As soon as you deleted the pods/STS, the volumes were no longer in use, so the PVC and PV were successfully removed.
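You can confirm this chain of dependencies before deleting anything. The names below are placeholders (StatefulSet claims are typically named <volumeClaimTemplate>-<sts-name>-<ordinal>):

```shell
# Which pods still reference a PVC? These must be deleted first.
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}'

# The PVC's finalizer (usually kubernetes.io/pvc-protection) blocks deletion while pods use it
kubectl get pvc <pvc-name> -o jsonpath='{.metadata.finalizers}{"\n"}'

# The PV's finalizer (usually kubernetes.io/pv-protection) blocks deletion while the PVC exists
kubectl get pv <pv-name> -o jsonpath='{.metadata.finalizers}{"\n"}'
```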
Firstly run
kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'
then run
kubectl delete pv {PV_NAME}
At the beginning, be sure that your Reclaim Policy is set to Delete. After the PVC is deleted, the PV should be deleted.
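To check which reclaim policy the PV currently has, and switch it to Delete if needed (the PV name is a placeholder):

```shell
# Show the current reclaim policy (Retain, Delete, or Recycle)
kubectl get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}{"\n"}'

# Switch it to Delete so the PV is removed once its PVC is gone
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```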
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
If it doesn't help, please check this [closed] Kubernetes PV issue: https://github.com/kubernetes/kubernetes/issues/69697
and try to delete the PV finalizers.
HINT: PV names may look like
pvc-name-of-volume, which can be confusing!
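To avoid mixing the two up, you can list each PV next to the PVC that claims it; every bound PV records its claim in spec.claimRef:

```shell
# Map PV names to the namespace/name of the PVC bound to them
kubectl get pv -o custom-columns='PV:.metadata.name,CLAIM-NS:.spec.claimRef.namespace,CLAIM:.spec.claimRef.name'
```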
First find the PVs:
kubectl get pv -n {namespace}
Then delete the PV, which sets its status to
Terminating:
kubectl delete pv {PV_NAME}
Then patch it, which sets the status of the PVC to
Lost:
kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'
Then get pvc volumes:
kubectl get pvc -n storage
Then you can delete the pvc:
kubectl delete pvc {PVC_NAME} -n {namespace}
For example, let's say we have Kafka installed in the
storage namespace:
$ kubectl get pv -n storage
$ kubectl delete pv pvc-ccdfe297-44c9-4ca7-b44c-415720f428d1
$ kubectl get pv -n storage (the delete above hangs, but the PV status turns to Terminating)
$ kubectl patch pv pvc-ccdfe297-44c9-4ca7-b44c-415720f428d1 -p '{"metadata":{"finalizers":null}}'
$ kubectl get pvc -n storage
$ kubectl delete pvc data-kafka-0 -n storage
I followed this method and it worked fine for me:
kubectl delete pv {your-pv-name} --grace-period=0 --force
After that, edit the PVC configuration:
kubectl edit pvc {your-pvc-name}
and remove the finalizer from the PVC configuration:
finalizers:
  - kubernetes.io/pvc-protection
You can read more about finalizers in the official Kubernetes guide.
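If you prefer a non-interactive alternative to kubectl edit, the same finalizer removal can be done with a single patch (the PVC name is a placeholder):

```shell
# Clears all finalizers on the PVC so the pending delete can complete
kubectl patch pvc <your-pvc-name> -p '{"metadata":{"finalizers":null}}'
```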
kubectl delete pv [pv-name]
You also have to check the reclaim policy of the PV: if it is set to Retain, the volume will not be cleaned up automatically.