What to do with Released persistent volume?

6/3/2018

TL;DR: I'm lost as to how to access the data after deleting a PVC, and as to why the PV wouldn't go away after the PVC was deleted.

Steps I'm taking:

  1. created a disk in GCE manually:

    gcloud compute disks create --size 5Gi disk-for-rabbitmq --zone europe-west1-b
  2. ran:

    kubectl apply -f /tmp/pv-and-pvc.yaml

    with the following config:

    # /tmp/pv-and-pvc.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-for-rabbitmq
    spec:
      accessModes:
      - ReadWriteOnce
      capacity:
        storage: 5Gi
      gcePersistentDisk:
        fsType: ext4
        pdName: disk-for-rabbitmq
      persistentVolumeReclaimPolicy: Delete
      storageClassName: standard
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-for-rabbitmq
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: standard
      volumeName: pv-for-rabbitmq
  3. deleted the PVC manually (at a high level: I'm simulating a disastrous scenario here, like accidental deletion or misconfiguration of a helm release):

    kubectl delete pvc pvc-for-rabbitmq

At this point I see the following:

$ kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                      STORAGECLASS   REASON   AGE
pv-for-rabbitmq   5Gi        RWO            Delete           Released   staging/pvc-for-rabbitmq   standard                8m
$

A side question, just to improve my understanding: why is the PV still there, even though its reclaim policy is set to Delete? Isn't that what the docs say the Delete reclaim policy does?

Now, if I try to re-create the PVC to regain access to the data in the PV:

$ kubectl apply -f /tmp/pv-and-pvc.yaml
persistentvolume "pv-for-rabbitmq" configured
persistentvolumeclaim "pvc-for-rabbitmq" created
$

I still get this for PVs, i.e. the PV is stuck in the Released state:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                             STORAGECLASS   REASON    AGE
pv-for-rabbitmq                            5Gi        RWO            Delete           Released   staging/pvc-for-rabbitmq          standard                 15m
$

...and I get this for pvcs:

$ kubectl get pvc
NAME               STATUS    VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-for-rabbitmq   Pending   pv-for-rabbitmq   0                         standard       1m
$

Looks like my PV is stuck in the Released status, and the PVC cannot bind to a PV that is not in the Available status.

So why can't the same PV and PVC be friends again? How do I make the PVC regain access to the data in the existing PV?

-- gmile
google-cloud-platform
google-cloud-storage
google-kubernetes-engine
kubernetes

4 Answers

6/15/2018

You are running into the typical misconception of thinking that PV and PVC are more closely related than they are.

Persistent Volume: in K8s, this resource has lots of options. For example, hostPath will reserve the specified size on the node on which the pod is running and map it to the desired path on both your pod and your node.

Persistent Volume Claim: a PVC, especially on GKE, will create a physical Persistent Disk on Google Cloud Platform and attach it to the node on which the pod is running as a secondary disk. So the claim is more cloud-provider specific.

Note: you don't need to create the disk manually. Just create the claim and watch what happens. You will have to give it some time, but eventually the status should be Bound, which means the Persistent Disk has been created, attached and is ready to use.
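
For illustration, a minimal claim-only manifest, assuming GKE's default standard StorageClass handles the provisioning (the name dynamic-pvc-for-rabbitmq is hypothetical):

# hypothetical claim; the default StorageClass provisions a GCE PD for it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc-for-rabbitmq
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard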

If you run df -h on the node, the disk will appear as an attached device, and it will also appear in kubectl get pv, since in the end it is a persistent volume.

About deleting things: when you delete a PV or a PVC, nothing happens right away. You can still get into the pod and go to the path that was mapped, no problem. As the underlying disk doesn't get deleted, the pod still has access to it. But if your pod goes down and gets re-created without that PV or PVC, you will get an error.
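
To make the dependency concrete, here is a hypothetical pod that mounts the claim; if it is re-created after the PVC is gone, it will fail because the claim can no longer be resolved:

# hypothetical pod consuming the claim; names are illustrative
apiVersion: v1
kind: Pod
metadata:
  name: rabbitmq
spec:
  containers:
  - name: rabbitmq
    image: rabbitmq:3
    volumeMounts:
    - name: data
      mountPath: /var/lib/rabbitmq
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-for-rabbitmq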

-- suren
Source: StackOverflow

12/19/2019
kubectl patch pv pv-for-rabbitmq -p '{"spec":{"claimRef": null}}'

This worked for me. Clearing spec.claimRef removes the stale reference to the deleted PVC, which moves the PV from Released back to Available so a new claim can bind to it.
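
A hedged sketch of the full recovery sequence, assuming the names from the question:

$ kubectl patch pv pv-for-rabbitmq -p '{"spec":{"claimRef": null}}'
$ kubectl get pv pv-for-rabbitmq    # STATUS should flip from Released to Available
$ kubectl get pvc pvc-for-rabbitmq  # a Pending claim for this volume should now bind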

-- Bharat Chhabra
Source: StackOverflow

8/14/2019

The official documentation has the answer; hopefully it helps others looking for the same (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes).

Retain: The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered “released”. But it is not yet available for another claim because the previous claimant’s data remains on the volume. An administrator can manually reclaim the volume with the following steps.

  1. Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.
  2. Manually clean up the data on the associated storage asset accordingly.
  3. Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new PersistentVolume with the storage asset definition.
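
For step 3, a hedged sketch of re-creating the PV over the surviving GCE PD from this question, this time with Retain so the disk also survives future PV deletion:

# re-created PV pointing at the existing disk; a sketch, not verified
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-for-rabbitmq
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Gi
  gcePersistentDisk:
    fsType: ext4
    pdName: disk-for-rabbitmq   # the storage asset that outlived the PV
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard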
-- Deepika Pandhi
Source: StackOverflow

6/15/2018

The phrase "Pods consume node resources and PVCs consume PV resources" may be useful for fully understanding the theory and the friendship between PV and PVC.

I attempted a full reproduction of the noted behavior using the provided YAML file and failed to reproduce it; everything returned the expected result. Hence, before providing any further details, here is a walk-through of my reproduction.

Step 1: Created a PD in the europe-west1-b zone

sunny@dev-lab:~$ gcloud compute disks create --size 5Gi disk-for-rabbitmq --zone europe-west1-b

WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O 
performance. For more information, see: 

NAME               ZONE            SIZE_GB  TYPE         STATUS
disk-for-rabbitmq  europe-west1-b  5        pd-standard  READY

Step 2: Create a PV and PVC using the project YAML file

sunny@dev-lab:~$  kubectl apply -f pv-and-pvc.yaml

persistentvolume "pv-for-rabbitmq" created
persistentvolumeclaim "pvc-for-rabbitmq" created

Step 3: List all the available PVCs

sunny@dev-lab:~$ kubectl get pvc
NAME               STATUS    VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-for-rabbitmq   Bound     pv-for-rabbitmq   5Gi        RWO            standard       16s

Step 4: List all the available PVs

sunny@dev-lab:~$ kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                      STORAGECLASS   REASON    AGE
pv-for-rabbitmq   5Gi        RWO            Delete           Bound     default/pvc-for-rabbitmq   standard                 28s

Step 5: Delete the PVC and verify the result

sunny@dev-lab:~$  kubectl delete pvc pvc-for-rabbitmq
persistentvolumeclaim "pvc-for-rabbitmq" deleted

sunny@dev-lab:~$  kubectl get pv

No resources found.

sunny@dev-lab:~$  kubectl get pvc

No resources found.

sunny@dev-lab:~$  kubectl describe pvc-for-rabbitmq

the server doesn't have a resource type "pvc-for-rabbitmq"

(The resource type was omitted in that last command; the correct form is kubectl describe pvc pvc-for-rabbitmq, which at this point would likewise report that the claim is not found.)

As per your question

A side question, just improve my understanding: why PV is still there, even though it has a reclaim policy set to Delete? Isn't this what the docs say for the Delete reclaim policy?

You are absolutely correct: as per the documentation, when a user is done with their volume, they can delete the PVC object from the API, which allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with the volume after it has been released of its claim. In your YAML it was set to:

Reclaim Policy:  Delete

which means that it should have been deleted immediately. Currently, volumes can either be Retained, Recycled or Deleted.

Why wasn't it deleted? The only thing I can think of is that the PV was somehow still claimed, likely because the PVC was not successfully deleted (its capacity is showing "0"); to fix this you will need to delete the pod. Alternatively, you may use the kubectl describe pvc command to see why the PVC is still in a Pending state.
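
For reference, a hedged example of that check, using the claim name from the question (the exact output varies; the Events section at the bottom normally explains why binding is blocked):

$ kubectl describe pvc pvc-for-rabbitmq   # inspect the Events section for the binding error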

And for the question, "How do I make a PVC regain access to the data in the existing PV?"

This is not possible because of the reclaim policy, i.e. Reclaim Policy: Delete. To make it possible you would need to use the Retain option instead, as per the documentation.

To validate the theory that you can delete PVC and keep the disk, do the following:

  • Change the reclaim policy to Retain
  • Delete the PVC
  • Delete the PV

And then verify whether the disk was retained; a sketch of these commands follows.
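
A hedged sketch of those steps, assuming the names from the question:

$ kubectl patch pv pv-for-rabbitmq -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
$ kubectl delete pvc pvc-for-rabbitmq
$ kubectl delete pv pv-for-rabbitmq
$ gcloud compute disks list --filter="name=disk-for-rabbitmq"   # the PD should still be listed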

-- arp-sunny.
Source: StackOverflow