So far I was convinced that one needs a PVC to access a PV, like in this example from the Kubernetes docs:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
But then I saw in the Docker docs that one can use the following syntax (an example using NFS):
kind: Pod
apiVersion: v1
metadata:
  name: nfs-in-a-pod
spec:
  containers:
  - name: app
    image: alpine
    volumeMounts:
    - name: nfs-volume
      mountPath: /var/nfs # Change this to the path where you want the share mounted
    command: ["/bin/sh"]
    args: ["-c", "sleep 500000"]
  volumes:
  - name: nfs-volume
    nfs:
      server: nfs.example.com # Change this to your NFS server
      path: /share1 # Change this to the relevant share
I am confused:
An emptyDir volume is first created when a Pod is assigned to a node, and exists as long as that Pod is running on that node. You don't need a PV or PVC for an emptyDir volume.
Note that when a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
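For example, a minimal Pod using an emptyDir volume might look like this (the names and image are illustrative, not from your manifests):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo      # hypothetical name
spec:
  containers:
  - name: app
    image: alpine
    command: ["/bin/sh", "-c", "sleep 500000"]
    volumeMounts:
    - name: cache-volume
      mountPath: /cache    # scratch space, gone when the Pod leaves the node
  volumes:
  - name: cache-volume
    emptyDir: {}           # no PV or PVC involved
```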
If you want to retain the data even if the Pod crashes, restarts, or is deleted or undeployed, then you need to use a PV and a PVC.
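To make your first example complete, the `myclaim` it references would need a PVC object like the following sketch (the access mode and requested size are assumptions, pick what your workload needs):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim            # must match claimName in the Pod spec
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi         # assumed size
```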
Look at another example below, where you don't need a PV and PVC, using hostPath:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
If you need to store the data on external storage solutions like NFS, Azure File Storage, AWS EBS, Google Persistent Disk, etc., then you need to create a PV and a PVC.
Mounting a PV directly into a Pod is not allowed and is against Kubernetes design principles: it would create tight coupling between the Pod's volume and the underlying storage.
A PVC enables loose coupling between the Pod and the persistent volume. The Pod doesn't know what underlying storage is used to store the container data, and it doesn't need to know.
PV and PVC are required for both static and dynamic provisioning of storage volumes for workloads in a Kubernetes cluster.
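With static provisioning, an administrator creates the PV up front and a PVC later binds to it. Reusing the NFS details from your second example, such a PV might look like this (capacity and access mode are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv             # hypothetical name
spec:
  capacity:
    storage: 5Gi           # assumed size
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs.example.com  # change this to your NFS server
    path: /share1            # change this to the relevant share
```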
The various kinds of things you can mount are part of the Volume object in the Kubernetes API (which is part of a PodSpec, which is part of a Pod). None of these are an option to mount a specific PersistentVolume by name.
(There are some special cases you can see there for things like NFS and various clustered storage systems. Those mostly predate persistent volumes.)
The best you can do here is to create a PVC that's very tightly bound to a single persistent volume, and then reference that in the pod spec.
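One way to get that tight binding is the PVC's `spec.volumeName` field, which names the specific PV the claim should bind to. A sketch (the PVC and PV names are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bound-to-one-pv    # hypothetical name
spec:
  volumeName: my-pv        # bind this claim to exactly this PV
  storageClassName: ""     # prevent dynamic provisioning from satisfying the claim
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi         # must not exceed the PV's capacity
```

The Pod then references this PVC via `persistentVolumeClaim.claimName`, exactly as in the first example above.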