Share NFS volume between Kubernetes clusters

6/26/2018

We have a setup in GKE with two different clusters. One cluster runs an nfs-server, and on that cluster we have a PersistentVolume that points to the server. This PV is then mounted in a pod running on that cluster. The second cluster also has a PV and a pod that should mount the same NFS volume. This is where the problem occurs: when we point the second cluster's PV at the nfs-server's ClusterIP address, it does not work. That is understandable, since the ClusterIP is only routable inside the first cluster, but I wonder how best to achieve this.

The setup is basically this:

Persistent Volume and Persistent Volume Claim used by the NFS server

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 20Gi
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  gcePersistentDisk:
    pdName: files
    fsType: ext4

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi

NFS server deployment

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: gcr.io/google_containers/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: nfs-pvc

NFS-server service

apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server

Persistent Volume and Persistent Volume Claim used by the pods:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 20Gi
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.4.0.20
    path: "/"


kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi

Part of the deployment file for the pod mounting the NFS volume:

  volumes:
    - name: files
      persistentVolumeClaim:
        claimName: nfs

Output of kubectl get pv and kubectl get pvc:

user@HP-EliteBook:~/Downloads$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
nfs      100Gi      RWX            Retain           Bound    default/nfs       manual                  286d
nfs-pv   100Gi      RWO            Retain           Bound    default/nfs-pvc   manual                  286d
user@HP-EliteBook:~/Downloads$ kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs       Bound    nfs      100Gi      RWX            manual         286d
nfs-pvc   Bound    nfs-pv   100Gi      RWO            manual         286d

The IP in the PV used by the pods is the problem. The pod in the same cluster can connect to it, but the pod in the other cluster cannot. I could use the actual podIP of the nfs-server pod from the other cluster, but the podIP changes with every deploy, so that is not a workable solution. What is the best way to get around this problem? I only want this second cluster to have access to the NFS server, without, for example, opening it up to the world.
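One direction I am considering, assuming both clusters sit on the same VPC network, is to expose the nfs-server Service through a GCP internal load balancer and point the second cluster's PV at that internal IP instead of the ClusterIP. This is only a sketch I have not verified in this setup: the annotation is the one GKE documented at the time of writing, nfs-server-internal is just a name I picked, and 10.x.x.x is a placeholder for whatever internal IP kubectl get svc would report.

# Sketch: expose the nfs-server through a GCP internal load balancer so the
# second cluster (same VPC network) can reach it without a public IP.
# The annotation below is the GKE one at the time of writing; verify it
# against your GKE version.
apiVersion: v1
kind: Service
metadata:
  name: nfs-server-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
---
# In the second cluster the PV would then use the load balancer's internal IP.
# 10.x.x.x is a placeholder for the EXTERNAL-IP shown by
# kubectl get svc nfs-server-internal.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 20Gi
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.x.x.x    # placeholder: internal load balancer IP
    path: "/"

Depending on the VPC firewall rules, traffic from the second cluster's nodes to ports 2049, 20048 and 111 may also need to be allowed explicitly. Is something like this the right approach, or is there a better way?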

-- katla
google-kubernetes-engine
kubernetes
nfs
persistent-volumes

0 Answers