Kubernetes: use NFS persistent volumes with a root user in a pod

11/23/2018

OK, I have been banging my head against the wall for several days now...

My use case: I am on my own bare-metal cloud, running Ubuntu machines, and I set up Kubernetes on 4 machines: one master, 3 workers. I created a private registry, cert-manager, etc.

The NFS shares are also on the worker nodes.

I have a piece of software that has to run as root inside a pod, and I want this root user to store data on a persistent volume backed by an NFS share.

root_squash is biting me in the butt...

I have created volumes and claims, and everything works fine as long as I am not root inside the pod. When running as root, the files on the NFS share are squashed to nobody:nogroup and the root user inside the pod can no longer use them...

What can I do?

1) Export the NFS share with the no_root_squash option, but this seems like a very bad idea given the security implications. I am not sure whether this can be mitigated by firewall rules alone?
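If going that route, the exposure of no_root_squash can at least be narrowed by exporting only to the worker nodes' addresses. A hypothetical /etc/exports line (the subnet is a placeholder for whatever network the workers sit on):

# /etc/exports -- placeholder subnet; restrict the export to the worker nodes only
/data/k8s/pv0004  10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)

This does not make no_root_squash safe, it only limits which hosts can abuse it.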

2) I tried all kinds of securityContext options for fsGroup, plus uid and gid mount options; all work fine as long as you are not root in the pod... but I am not sure I understand this in full, so:

My PV YAML:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: s03-pv0004
  annotations:
    pv.beta.kubernetes.io/gid: "1023"
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /data/k8s/pv0004
    server: 212.114.120.61

As you can see, I created a dedicated nfsuser with uid 1023 and use this to make the pods store data as that user... This works fine as long as I am not root inside the pods...

The pods I am running are MarkLogic pods in a StatefulSet, like so:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: marklogic
  namespace: default
spec:
  selector:
    matchLabels:
      app: marklogic
  serviceName: "ml-service"
  replicas: 3
  template:
    metadata:
      labels:
        app: marklogic
    spec:
      securityContext:
        fsGroup: 1023
... more

runAsUser: 1023 works, but again not if I want to be root inside the pod...
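For completeness, this is roughly the pod-level securityContext that works for the non-root case (a sketch combining the values mentioned above; it does not solve the root problem):

    spec:
      securityContext:
        runAsUser: 1023   # dedicated nfsuser; files on the share are owned by uid 1023
        fsGroup: 1023     # matches the pv.beta.kubernetes.io/gid annotation on the PV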

My question: can it be done? Run a pod as root and still use NFS as a persistent volume with a secure NFS share (one that does not use no_root_squash)?

Or do I need to drop the idea of NFS and move to an alternative like GlusterFS?

-- Hugo Koopmans
kubernetes
nfs
persistent-volumes
root

1 Answer

11/30/2018

I have moved from NFS storage to Kubernetes' local storage option. A local PersistentVolume carries node affinity, so a pod that needs the PV lands on the same node each time it is recreated...
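A local PV of that kind looks roughly like this (a sketch; the node name worker-1, the storage class name, and the path are placeholders, reusing the sizes from the NFS PV above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: s03-local-pv0004
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/k8s/pv0004       # directory on the worker node itself
  nodeAffinity:                  # pins the PV (and thus the pod) to one node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1

Because the volume is a plain local directory, there is no root squashing: root inside the pod is simply root on that node's filesystem.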

-- Hugo Koopmans
Source: StackOverflow