I am setting up containers through Google Cloud Platform (GCP) Kubernetes Engine. I have a requirement to mount multiple volumes into the containers as they are created. These volumes have to be persistent, so I went with an NFS approach: I have a VM running an NFS service that exports a couple of directories.
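For reference, the exports on the NFS VM (10.2.1.6) look roughly like this; the client range and export options shown here are illustrative, not exact:

# /etc/exports on the NFS VM; options are illustrative
/exports/path1  10.0.0.0/8(rw,sync,no_subtree_check)
/exports/path2  10.0.0.0/8(rw,sync,no_subtree_check)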
Sample YAML files are given below.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-branch
  labels:
    component: myapp-branch
spec:
  selector:
    matchLabels:
      component: myapp-branch
  template:
    metadata:
      labels:
        component: myapp-branch
    spec:
      imagePullSecrets:
        - name: myprivatekey
      containers:
        - name: myapp-branch
          image: mydockerrepo/myapp/webapp:6.6
          command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 100; done"]
          env:
            - name: myapp_UID
              value: "1011"
            - name: myapp_GID
              value: "1011"
            - name: myapp_USER
              value: "myapp_branch"
            - name: myapp_XMS_G
              value: "1"
            - name: myapp_XMX_G
              value: "6"
          volumeMounts:
            - mountPath: /mypath1/path1
              name: pvstorestorage
            - mountPath: /mypath2/path2
              name: mykeys
      volumes:
        - name: pvstorestorage
          persistentVolumeClaim:
            claimName: standalone
        - name: mykeys
          persistentVolumeClaim:
            claimName: conf
PVAndPVC.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: standalone
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.2.1.6
    path: "/exports/path1"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: standalone
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: conf
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.2.1.6
    path: "/exports/path2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: conf
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
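For completeness, I apply the manifests in the standard way:

kubectl apply -f PVAndPVC.yaml
kubectl apply -f deployment.yaml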
After applying them, I see that both of the container's volume mounts (/mypath1/path1 and /mypath2/path2) end up mounted to the same NFS export (/exports/path2, the second one). This only happens with persistentVolumeClaim; when I tried emptyDir, it worked fine. If anyone has tried this approach and knows the solution, it would be really helpful.
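A quick way to confirm the symptom from inside the pod (the pod name suffix here is a placeholder):

# both mount lines report 10.2.1.6:/exports/path2 when the problem occurs
kubectl exec -it myapp-branch-<pod-suffix> -- sh -c "mount | grep exports"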
You must add a rule to your PVC (PersistentVolumeClaim) definitions to make each one match its intended PV (PersistentVolume).
Having the same name is not enough: without a selector, Kubernetes binds a PVC to any available PV that satisfies its requested capacity and access modes; the PVC's name plays no role, so your claims may not bind to the PVs you expect.
Change your PV and PVC definitions into something like (untested):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: standalone
  labels:
    type: standalone
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.2.1.6
    path: "/exports/path1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: conf
  labels:
    type: conf
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.2.1.6
    path: "/exports/path2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: standalone
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: standalone
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: conf
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: conf
(Specifically, I added a metadata.labels.type to each PV and a spec.selector.matchLabels to each PVC.)
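As an alternative to label selectors (not part of the fix above, just a sketch), a PVC can also be pinned to one specific PV by name with spec.volumeName:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: standalone
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  # bind this claim directly to the PV named "standalone"
  volumeName: standalone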
Also, use kubectl get pv and kubectl get pvc to see which PV each claim was bound to and to ease debugging.
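When the binding is correct, the output should look roughly like this (ages, namespaces, and reclaim policies will differ); each PVC should show STATUS Bound with its intended PV in the VOLUME column:

$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
conf         1Gi        RWX            Retain           Bound    default/conf                                 1m
standalone   1Gi        RWX            Retain           Bound    default/standalone                           1m

$ kubectl get pvc
NAME         STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
conf         Bound    conf         1Gi        RWX                           1m
standalone   Bound    standalone   1Gi        RWX                           1m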