I'm trying to mount an NFS server (set up on an Azure Virtual Machine) inside a Kubernetes cluster (AKS).
Basically I followed this tutorial: https://docs.microsoft.com/en-us/azure/aks/azure-nfs-volume
Everything seemed fine so far. I tested the connection from a pod to the NFS machine with telnet IP_ADDRESS_NFS_MACHINE 111 and telnet IP_ADDRESS_NFS_MACHINE 2049, and telnet connected both times.
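In case it matters, telnet only proves the ports are reachable; the export list itself can be checked from a throwaway pod inside the cluster, roughly like this (the pod name and the ubuntu image are just placeholders):
# start a disposable debug pod in the cluster and open a shell in it
kubectl run nfs-debug --rm -it --image=ubuntu -- bash
# inside the pod: install the NFS client utilities and query the server
apt-get update && apt-get install -y nfs-common
showmount -e IP_ADDRESS_NFS_MACHINE   # lists the exports and which clients are allowed
rpcinfo -p IP_ADDRESS_NFS_MACHINE     # lists the registered RPC services (mountd, nfs)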
But for some reason I'm getting an error when the pod starts:
MountVolume.SetUp failed for volume "aks-nfs" : mount failed: exit status 32
And the most important part:
Output: mount.nfs: access denied by server while mounting IP_ADDRESS:/export/data
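The full mount output is visible in the pod's events, e.g.:
kubectl -n web describe pod <pod-name>            # the Events section contains the mount.nfs output
kubectl -n web get events --sort-by=.lastTimestamp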
I guess it is a permissions issue. I tried using securityContext in the Deployment object and set fsGroup to 33, since my application runs under Apache and I assumed www-data's GID would be the right group to attach. I'm not sure about this part, though.
On the NFS server I chowned the exported directory /export/data to www-data:www-data and chmodded it to 777, but the issue persists.
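For completeness, this is roughly what was run on the server (exact commands may have differed; exportfs -ra just re-reads /etc/exports after changes):
sudo chown -R www-data:www-data /export/data   # give ownership to Apache's user/group
sudo chmod -R 777 /export/data                 # world-writable, as described above
sudo exportfs -ra                              # re-export everything in /etc/exports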
Here's my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: web
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      securityContext:
        fsGroup: 33
      containers:
        - name: web
          image: the_image
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nfs-pv
              mountPath: /var/www/html/storage
      volumes:
        - name: nfs-pv
          persistentVolumeClaim:
            claimName: nfs-pvc
My PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: nfs
And my PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: aks-nfs
  labels:
    type: nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: xx.xxx.xx.xx
    path: /export/data
My exports config file:
/export        10.240.0.0/16(rw,async,insecure,fsid=0,crossmnt,no_subtree_check)
/export        localhost(rw,async,insecure,fsid=0,crossmnt,no_subtree_check)
I'm actually not sure which user the mounting process connects as on the NFS side. Thanks in advance for any help.
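One more data point that can help with "access denied by server": the effective export table and mountd's log lines on the NFS VM itself. Roughly (Ubuntu log path assumed):
sudo exportfs -v             # shows each export with its client spec and resolved options
sudo showmount -e localhost  # the export list as a client would see it
sudo tail -f /var/log/syslog # rpc.mountd typically logs "refused mount request ..." lines here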
The PV definition is not correct.
Your /etc/exports only exports /export, so the PV should not point at the /export/data sub-path.
Modify the PV as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: aks-nfs
  labels:
    type: nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: xx.xxx.xx.xx
    path: /export
Also, in the PVC use the same name as defined in the PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aks-nfs 
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: nfs
Once you've modified the files, reapply them with kubectl. Note that if you rename the PVC to aks-nfs, you also need to update claimName in the Deployment's persistentVolumeClaim to match.
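A rough sketch of the re-apply, assuming the manifests are saved as pv.yaml, pvc.yaml and deployment.yaml and that the PVC lives in the web namespace alongside the Deployment (PV/PVC specs are largely immutable, so delete and recreate is simplest):
# delete the old objects first (the PVC is namespaced, the PV is cluster-scoped)
kubectl -n web delete -f deployment.yaml
kubectl -n web delete pvc nfs-pvc
kubectl delete pv aks-nfs
# recreate them with the corrected definitions
kubectl apply -f pv.yaml
kubectl -n web apply -f pvc.yaml
kubectl -n web apply -f deployment.yaml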