I want to create a ReplicationController with a pod that will have a PVC (PersistentVolumeClaim). My PVC will use NFS storage for the PV (PersistentVolume).
Once the pod is operational, the RC will keep the pods up and running. In this situation, would the data in the pod be available / persistent when …
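Roughly, the setup I have in mind looks like this (all names, the image, and the mount path are placeholders):

apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 1
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html   # data I want to survive pod restarts
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-nfs-pvc   # PVC bound to the NFS-backed PV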
Depending on your provider/provisioner, persistentVolumeReclaimPolicy: Retain is not necessarily a "come back and get me!" process. Per the Kubernetes documentation, this policy is designed to prevent deletion of the volume so that you can recover your data (outside of Kubernetes) at a later time.
Here is what it looks like with this policy in play:
$ kubectl get pvc,pv
NAME                                           STATUS   VOLUME                  CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nfs-server               Bound    nfs-server              100Gi      RWX                           5d
persistentvolumeclaim/nfs-server-wp-k8specs    Bound    nfs-server-wp-k8specs   100Gi      RWX                           2d

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS   REASON   AGE
persistentvolume/nfs-server                100Gi      RWX            Retain           Bound    default/nfs-server                                      5d
persistentvolume/nfs-server-wp-k8specs     100Gi      RWX            Retain           Bound    default/nfs-server-wp-k8specs                           2d
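To illustrate the "recover your data outside of Kubernetes" part: if one of these claims is deleted, a Retain PV moves to Released instead of being scrubbed, and it will not rebind to a new claim on its own. A rough sketch of reusing it (PV name taken from the listing above; clearing claimRef is one common manual-reclaim approach):

$ kubectl delete pvc nfs-server                # PV nfs-server becomes Released; the NFS data stays put
$ kubectl patch pv nfs-server --type json \
    -p '[{"op": "remove", "path": "/spec/claimRef"}]'   # PV becomes Available and can bind again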
That depends a lot on how you define your PV/PVC. In my experience it is fairly easy to use an NFS-based PV to retain data across pod recreations and deletions. I use the following approach for an NFS volume shared by multiple pods.
Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvname
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <nfs IP>
    path: <nfs path>
Claim :
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvcname
spec:
volumeName: pvname
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Mi
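A pod (standalone or inside an RC/Deployment template) then mounts the claim like any other volume. A minimal example (the pod name, image, and mount path are just illustrations; only claimName has to match the claim above):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data        # files written here land on the NFS export
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvcname        # the claim defined above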
This ensures that whatever I delete in k8s, I can still get at my data on a known path on the NFS server, and I can reuse it by recreating the PV/PVC/pod in k8s, so it should survive all three cases you mentioned.
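A quick sanity check using the example pod above (the marker file and manifest file name are hypothetical): write a file, delete and recreate the pod, then confirm the file survived:

$ kubectl exec nfs-client -- sh -c 'echo hello > /data/marker'
$ kubectl delete pod nfs-client
$ kubectl apply -f nfs-client-pod.yaml            # recreate the pod from the manifest above
$ kubectl exec nfs-client -- cat /data/marker     # should print "hello"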