I'm trying to create a dynamic Azure Disk volume to use in a pod that has specific permissions requirements.
The container runs as user id 472, so I need a way to mount the volume with read/write permissions for (at least) that user.
With the following StorageClass defined:
apiVersion: storage.k8s.io/v1
kind: StorageClass
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: Immediate
metadata:
  name: foo-storage
mountOptions:
  - rw
parameters:
  cachingmode: None
  kind: Managed
  storageaccounttype: Standard_LRS

and this PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-storage
  namespace: foo
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: foo-storage
  resources:
    requests:
      storage: 1Gi

I can run the following in a pod:
containers:
  - image: ubuntu
    name: foo
    imagePullPolicy: IfNotPresent
    command:
      - ls
      - -l
      - /var/lib/foo
    volumeMounts:
      - name: foo-persistent-storage
        mountPath: /var/lib/foo
volumes:
  - name: foo-persistent-storage
    persistentVolumeClaim:
      claimName: foo-storage

The pod will mount and start correctly, but kubectl logs <the-pod> will show:
total 24
drwxr-xr-x 3 root root  4096 Nov 23 11:42 .
drwxr-xr-x 1 root root  4096 Nov 13 12:32 ..
drwx------ 2 root root 16384 Nov 23 11:42 lost+found

i.e. the mounted directory is owned by root and is not writable by any other user.
I've tried adding options such as uid=472 and user=472 to the mountOptions section of the StorageClass, but whatever I try I get mount errors on startup, e.g.:
mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m1019941199 --scope -- mount -t ext4 -o group=472,rw,user=472,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m1019941199
Output: Running scope as unit run-r7165038756bf43e49db934e8968cca8b.scope.
mount: wrong fs type, bad option, bad superblock on /dev/sdc,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so.
I've also looked through man mount for relevant options, but I haven't found anything that works.
How can I configure this StorageClass, PersistentVolumeClaim and volume mount so that the non-root user running the container process can write (and create subdirectories) in the mounted path?
The uid=472/user=472 mount options fail because ext4 does not support them (they only exist for filesystems such as vfat); on an ext4 volume the ownership has to be changed after mounting. Kubernetes does this for you when you define a securityContext in your pod spec: with fsGroup set, the kubelet changes the group ownership of the mounted volume to that GID and makes it group-writable, matching the user and group id the container runs as:
securityContext:
  runAsUser: 472
  fsGroup: 472

The stable Grafana Helm chart does the same thing; see securityContext under Configuration here: https://github.com/helm/charts/tree/master/stable/grafana#configuration
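For completeness, here is a minimal sketch of a full pod spec combining the securityContext with the volume from the question. The names, image and mount path are the ones used above; the command is just an illustrative write test, not part of the original setup:

apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: foo
spec:
  # Pod-level securityContext: the container process runs as uid 472,
  # and the kubelet makes the volume group-owned and writable by gid 472.
  securityContext:
    runAsUser: 472
    fsGroup: 472
  containers:
    - name: foo
      image: ubuntu
      imagePullPolicy: IfNotPresent
      # Illustrative write test: create a subdirectory, then list the mount.
      command: ["sh", "-c", "mkdir -p /var/lib/foo/subdir && ls -la /var/lib/foo"]
      volumeMounts:
        - name: foo-persistent-storage
          mountPath: /var/lib/foo
  volumes:
    - name: foo-persistent-storage
      persistentVolumeClaim:
        claimName: foo-storage

With fsGroup applied, the mount point should show up as group-owned by 472 and group-writable (e.g. drwxrwsr-x root 472), so the non-root user can create files and subdirectories in /var/lib/foo.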