Reuse PV in Deployment

7/5/2021

What do I need? A Deployment with two Pods that read from the SAME volume (PV). The volume must be shared between the Pods in read-write mode.

Note: I already have Rook Ceph with a defined StorageClass "rook-cephfs" that allows this capability. This SC also has the Retain reclaim policy.
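For reference, the StorageClass looks roughly like this (the provisioner and Ceph parameters follow the standard Rook CephFS example; the fsName, pool, and secret names are illustrative and depend on your cluster):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
# CephFS CSI driver deployed by Rook (name assumes the default rook-ceph namespace)
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs          # illustrative CephFilesystem name
  pool: myfs-data0      # illustrative data pool
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Retain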

This is what I did:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-nginx
spec:
  accessModes:
    - "ReadWriteMany"
  resources:
    requests:
      storage: "10Gi"
  storageClassName: "rook-cephfs"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: default
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: Always
        ports:
          - name: http
            containerPort: 80
        volumeMounts:
          - name: pvc-data
            mountPath: /data
      volumes:
      - name: pvc-data
        persistentVolumeClaim:
          claimName: data-nginx

It works! Both nginx containers share the volume.
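A quick way to check the sharing (illustrative; assumes the default namespace and the labels from the Deployment above):

# Grab the names of the two replica Pods
PODS=($(kubectl get pods -l app=nginx -o jsonpath='{.items[*].metadata.name}'))

# Write from the first Pod, read from the second -- both see the same CephFS volume
kubectl exec "${PODS[0]}" -- sh -c 'echo hello > /data/test.txt'
kubectl exec "${PODS[1]}" -- cat /data/test.txt   # prints: hello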

Problem: If I delete all the resources (except the PV) and recreate them, a NEW PV is created instead of reusing the old one. So basically, the new volume is empty.

The OLD PV gets the status "Released" instead of "Available".
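You can see it like this ($PV_NAME is the name of the old PV, as in the patch command below):

kubectl get pv                                              # STATUS column shows Released
kubectl get pv $PV_NAME -o jsonpath='{.spec.claimRef.uid}'  # UID of the already-deleted PVC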

I realized that if I apply a patch to the PV to remove the claimRef.uid:

kubectl patch pv $PV_NAME --type json -p '[{"op": "remove", "path": "/spec/claimRef/uid"}]'

and then redeploy, it works. But I don't want to do this manual step. I need this automated.
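For clarity, the full manual cycle looks like this (nginx.yaml is a hypothetical file holding the PVC + Deployment above):

kubectl delete deployment nginx
kubectl delete pvc data-nginx
kubectl patch pv "$PV_NAME" --type json -p '[{"op": "remove", "path": "/spec/claimRef/uid"}]'
kubectl apply -f nginx.yaml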

I also tried the same configuration with a StatefulSet and got the same problem.

Any solution?

-- mrk
kubernetes
persistent-volume-claims
persistent-volumes

2 Answers

7/5/2021

Make sure to use reclaimPolicy: Retain in your StorageClass. It tells Kubernetes to keep the PV (and the underlying storage asset) instead of deleting it when the claim goes away.

Ref: https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/
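The linked page also shows how to change the policy on an already-provisioned PV, roughly:

# Flip an existing PV to Retain (replace <your-pv-name>)
kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'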

-- Emruz Hossain
Source: StackOverflow

7/6/2021

"But I don't want to do this manual step. I need this automated."

Based on the official documentation, it is unfortunately not possible automatically. First, look at the reclaim policy:

PersistentVolumes that are dynamically created by a StorageClass will have the reclaim policy specified in the reclaimPolicy field of the class, which can be either Delete or Retain. If no reclaimPolicy is specified when a StorageClass object is created, it will default to Delete.

So we have two supported options for the reclaim policy: Delete or Retain.

The Delete option is not for you, because:

for volume plugins that support the Delete reclaim policy, deletion removes both the PersistentVolume object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the reclaim policy of their StorageClass, which defaults to Delete. The administrator should configure the StorageClass according to users' expectations; otherwise, the PV must be edited or patched after it is created.

The Retain option allows for manual reclamation of the resource:

When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps:

1. Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.
2. Manually clean up the data on the associated storage asset accordingly.
3. Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new PersistentVolume with the storage asset definition.
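As a rough sketch of step 3 for your CephFS case: copy the csi section (driver, volumeHandle, volumeAttributes) from the released PV (kubectl get pv <old-pv> -o yaml) into a fresh static PV. Every value below is a placeholder, not a working definition:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nginx-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: rook-cephfs
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com
    volumeHandle: <copy-from-the-released-pv>
    volumeAttributes:
      clusterID: rook-ceph
      fsName: <copy-from-the-released-pv>
      # ...copy the remaining volumeAttributes and secret refs from the old PV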

-- Mikołaj Głodziak
Source: StackOverflow