Container Data in Persistent Volume Claim when POD crashes

9/26/2016

I want to create a replication controller with a POD which will have a PVC (Persistent Volume Claim). My PVC will use NFS storage for the PV (Persistent Volume).

Once the POD is operational, the RC keeps the PODs up and running. In this situation, would the data in the POD be available/persistent when:

  1. the POD is stopped/deleted by a delete command and the RC re-launches it? That means Kubernetes was not shut down. In this case, can the new POD access the same data from the same volume?
  2. the POD is stopped and the Kubernetes processes and nodes are restarted, while the NFS storage is still attached as the PV?
  3. a new PV is attached to Kubernetes and the old PV is detached?
-- Santanu Dey
kubernetes
persistent-volumes

2 Answers

11/1/2018

Depending on your provider/provisioner, persistentVolumeReclaimPolicy: Retain is not necessarily an automatic "come back and get me!" mechanism. Per the Kubernetes documentation, this policy is designed to prevent deletion of the underlying volume so that you can recover your data manually (outside of Kubernetes) at a later time.
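If your PV was created with a different reclaim policy, you can switch it to Retain after the fact. A minimal sketch, assuming an existing PV named pvname (substitute your own PV name):

$ kubectl patch pv pvname -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

This only changes how Kubernetes treats the volume when its claim is released; it does not touch the data already on the NFS export.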

Here is what it looks like with this policy in play:

$ kubectl get pvc,pv

NAME                                          STATUS    VOLUME                  CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nfs-server              Bound     nfs-server              100Gi      RWX                           5d
persistentvolumeclaim/nfs-server-wp-k8specs   Bound     nfs-server-wp-k8specs   100Gi      RWX                           2d

NAME                                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                           STORAGECLASS   REASON    AGE
persistentvolume/nfs-server              100Gi      RWX            Retain           Bound     default/nfs-server                                       5d
persistentvolume/nfs-server-wp-k8specs   100Gi      RWX            Retain           Bound     default/nfs-server-wp-k8specs                            2d
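To illustrate what Retain means in practice: if the claim is deleted, the PV moves to Released rather than being wiped, and the data stays on the NFS export. A hedged sketch (PV/PVC names taken from the listing above; run against your own cluster):

$ kubectl delete pvc nfs-server
$ kubectl get pv nfs-server        # STATUS is now Released; data is still on the NFS export

A Released PV will not bind to a new claim on its own, because it still carries the old claimRef. One common way to make it reusable is to clear that reference:

$ kubectl patch pv nfs-server -p '{"spec":{"claimRef": null}}'   # PV becomes Available again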
-- yomateo
Source: StackOverflow

9/26/2016

That depends a lot on how you define your PV/PVC. In my experience it is pretty easy to use an NFS-based PV to retain data across pod recreations and deletions. I use the following approach for an NFS volume shared by multiple pods.

Volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvname
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <nfs IP>
    path: <nfs path>

Claim:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcname
spec:
  volumeName: pvname
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

This ensures that whatever I delete in k8s, I can still get at my data on a known path on the NFS server, and I can reuse it by recreating the PV/PVC/POD in k8s. It should therefore survive all three cases you mentioned.
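For completeness, here is a sketch of how a replication controller's pod template could mount that claim. The names myapp and the nginx image and mount path are placeholders, not part of the original answer; only claimName: pvcname ties back to the claim above:

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: nginx              # placeholder image
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # placeholder mount path
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvcname      # the claim defined above

Because the volume is resolved through the PVC at pod start, any pod the RC re-launches mounts the same NFS path and sees the same data.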

-- Radek 'Goblin' Pieczonka
Source: StackOverflow