Mounting an external NFS share on a pod: permission denied when accessing files

8/4/2021

I have read through the relevant questions and answers on Stack Overflow, done a lot of googling, and asked the Kubernetes gurus around me, but to no avail... This problem is driving me crazy...

Here is my problem: we have several environments with different tenants, and each of them has an NFS server (on AIX, Solaris, Linux, Windows, ... depending on the tenant). We want to mount the NFS share in our Kubernetes deployment, on a specific pod.

So far, that part works: we can mount the NFS share with NFSv4, and that for every one of our external NFS servers.

I am using this Kubernetes provisioner (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner) and it works.

Here is my configuration to make it work:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-xy-provisioner
  labels:
    app: nfs-xy-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-xy-provisioner
  template:
    metadata:
      labels:
        app: nfs-xy-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-xy-provisioner
          image: XYZ/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-xxx-xy-provisioner
            - name: NFS_SERVER
              value: myServer.example.com
            - name: NFS_PATH
              value:  /my/path/xy_eingang
      volumes:
        - name: nfs-client-root
          nfs:
            server: myServer.example.com
            path: /my/path/xy_eingang

With the following StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-xxx-xy-nfs-storage
provisioner: k8s-sigs.io/nfs-xxx-xy-provisioner
parameters:
  pathPattern: ""
  archiveOnDelete: "false"
reclaimPolicy: Retain

with the following PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-xy-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-xxx-xy-nfs-storage"
spec:
  storageClassName: "managed-xxx-xy-nfs-storage"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

The mount in the pod:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  template:
    spec:
      volumes:
        - name: app-xy
          persistentVolumeClaim:
            claimName: app-xy-pvc    
      containers:
        - name: app
          volumeMounts:
            - name: app-xy
              mountPath: /my/pod/path/xy

Here is the mount:

myServer.example.com:/my/path/xy_eingang on /my/pod/path/xy type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=1.2.3.4,local_lock=none,addr=2.3.4.5)

Now when I am on the mounted path, I can see the following:

drwxrws--- 3 65534 4294967294  73728 Jul  2 07:52 33
drwxrws--- 2 65534 4294967294  69632 Jul  2 07:52 44
drwxrws--- 2 65534 4294967294  90112 Jul  2 07:52 55
-rwxrws--- 1 65534 4294967294 630793 Oct 20  2014 5905001_00001.ZIP

So we have UID=65534 and GID=4294967294 (that is, 2^32-2, i.e. -2 interpreted as an unsigned 32-bit ID). I've tried setting fsGroup or supplementalGroups to 4294967294, but Kubernetes complains that it can only use numbers from 0 to 2147483647 inclusive.

On our NFS server (in this example), we have the following user/group:

  • User: usernfs (UID=56008)
  • Group: usernfs (GID=56001)

This NFS ID mapping is not done, and since only the application runs in the pod, idmapd is not started there. From what I understand, the mount is done on the node, and the pod only sees the mount from the node.
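For illustration, this is the direction I have been looking at with fsGroup/supplementalGroups, here written with the server-side usernfs IDs rather than the unmappable 4294967294 (only a sketch; it assumes the cluster lets a pod run with arbitrary UIDs/GIDs):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  template:
    spec:
      securityContext:
        # usernfs IDs as defined on the NFS server
        # (assumption: the cluster policies allow these values)
        runAsUser: 56008
        runAsGroup: 56001
        fsGroup: 56001              # within the accepted 0..2147483647 range
        supplementalGroups: [56001]
      volumes:
        - name: app-xy
          persistentVolumeClaim:
            claimName: app-xy-pvc
      containers:
        - name: app
          volumeMounts:
            - name: app-xy
              mountPath: /my/pod/path/xy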

We are not the owners of the Kubernetes installation; we are plain users and have no possibility to change anything in the Kubernetes configuration, on the nodes, etc. We simply use the Kubernetes functionality to deploy our application. We cannot use Helm; the only tool we can use is Kustomize.

For security reasons we cannot change the permissions on the NFS server to 777/644/744/666 or the like, so all the advice to change the permissions on the shared disk does not work for us.

I've tried switching to NFSv3, but from a security point of view our security team doesn't want to use such an old protocol, so we must use NFSv4.

I know that for NFSv4 we need to have idmapd running, but I have no idea where it needs to run: on the node, in the pod, somewhere else? I am quite new to Kubernetes, and things I could normally do in minutes are taking me weeks (like this problem), and I cannot find a way to solve it.
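For completeness, from what I have read so far, the NFSv4 ID mapping is configured through /etc/idmapd.conf on the NFS client side, i.e. on the Kubernetes node where the mount actually happens (and on the server). A minimal sketch of that file, assuming the NFSv4 domain is example.com:

[General]
Domain = example.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

But since we have no access to the nodes, I cannot verify or change this myself.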

So any help in solving this permission problem would be welcome...

The version of Kubernetes is the following:

Client Version: version.Info{
  Major:"1", 
  Minor:"18", 
  GitVersion:"v1.18.12",
  GitCommit:"7cd5e9086de8ae25d6a1514d0c87bac67ca4a481",
  GitTreeState:"clean",
  BuildDate:"2020-11-12T09:18:55Z",
  GoVersion:"go1.13.15",
  Compiler:"gc",
  Platform:"linux/amd64"
}

Server Version: version.Info{
  Major:"1",
  Minor:"19",
  GitVersion:"v1.19.9+vmware.1",
  GitCommit:"f856d899461199c512c21d0fdc67d49cc70a8963",
  GitTreeState:"clean", BuildDate:"2021-03-19T23:57:11Z",
  GoVersion:"go1.15.8",
  Compiler:"gc",
  Platform:"linux/amd64"
}

Kind regards, Alessandro

-- Alessandro Perucchi
kubernetes
kustomize
nfs

1 Answer

8/4/2021

I know how frustrating it is. I have used this on CentOS 8 and Ubuntu 18/20, on bare metal and on DigitalOcean. We had to install the NFS tools on the host servers, and then it worked like a charm. We didn't even have to touch user security, UIDs, etc.
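For example (only a sketch; package names depend on the distribution, but these are the usual NFS client packages):

# Ubuntu 18.04 / 20.04
sudo apt-get install -y nfs-common

# CentOS 8
sudo dnf install -y nfs-utils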

-- Siddique Ahmad
Source: StackOverflow