I have a single NFS mount containing read-only media assets that I want to present to multiple projects.
Creating a new PV in each project with the same NFS path seems too clunky. And what if other PVCs were to claim my asset directory by accident?
Beyond that I have no idea how to approach this. How can I accomplish it?
edit: To be clear, I want to avoid cluster-admin intervention; cluster-admin rights are required to create a PV.
PV CONFIG
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: null
  labels:
    app: my_app
  name: my-assets
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 25Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: my-assets
    namespace: my_namespace
    resourceVersion: "13480134"
    uid: ea36d352-1a22-11e7-a443-0050568b4a96
  nfs:
    path: /nfs_volume
    server: nfs_server
  persistentVolumeReclaimPolicy: Recycle
status: {}
PVCs from namespaces other than my_namespace cannot claim against this PV. Here is a PVC config from a different namespace that is unable to bind to the existing PV, even with ReadWriteMany:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: null
  name: my-assets
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 25Gi
  selector:
    matchLabels:
      app: my_app
  volumeName: my-assets
status: {}
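(From what I can tell, the claimRef in the PV above already records both a name and a namespace, so the PV is bound to exactly one PVC in my_namespace. A second namespace would presumably need its own PV object pointing at the same NFS export — a sketch, with a hypothetical name:)

```yaml
# Hypothetical second PV for another namespace, backed by the same NFS export.
# A PV binds to at most one PVC, so each namespace gets its own PV object.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-assets-other-ns
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 25Gi
  nfs:
    path: /nfs_volume
    server: nfs_server
  persistentVolumeReclaimPolicy: Retain
```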
You just need to list ReadWriteMany as an access mode in the PV definition and in the PVCs as well.
There is an example available here: https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs
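A minimal sketch of what that looks like (server name and path here are placeholders, not from the question):

```yaml
# PV exposing the NFS export with ReadWriteMany...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-assets
spec:
  capacity:
    storage: 25Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs.example.com
    path: /exports/assets
---
# ...and the same access mode in every PVC that binds to it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-assets
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 25Gi
```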
I am not sure what you mean by project, but if you are referring to Deployments of different apps, it should work with a single PV definition for an NFS volume that has the ReadWriteMany access mode. However, I would recommend always including one PV and PVC definition per deployment that requires access to the NFS share. That way the dependency is explicit in the deployment and you can change it for every app separately. Just imagine you want to change it for one app but not the other.
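For instance (app names hypothetical), two apps can each get their own PV backed by the same NFS export, so either one can later be repointed independently:

```yaml
# One PV per app, both backed by the same NFS export
apiVersion: v1
kind: PersistentVolume
metadata:
  name: assets-app-a
spec:
  capacity:
    storage: 25Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs_server
    path: /nfs_volume
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: assets-app-b
spec:
  capacity:
    storage: 25Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs_server
    path: /nfs_volume
```

Each app's PVC then binds to its own PV (e.g. via volumeName), so changing the path for one app never touches the other.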
Here is an example I use with an Amazon EFS NFS mount for all pods in my CockroachDB deployment, for writing backups. I have it split into two YAML files, but you can also collapse them into one. Note that you can use the same PersistentVolumeClaim for all pods.
1 cockroachdbPV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cockroachdbpv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: {amazon path here}
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cockroachdbpv
spec:
  accessModes:
  - "ReadWriteMany"
  resources:
    requests:
      storage: 10Gi
2 cockroachdb.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: cockroachdb
spec:
  serviceName: "cockroachdb"
  replicas: 3
  template:
    metadata:
      labels:
        app: cockroachdb
      annotations:
        {...}
    spec:
      containers:
      - name: cockroachdb
        {...}
      volumes:
      {...}
      - name: efsdir
        persistentVolumeClaim:
          claimName: cockroachdbpv