Kubernetes persistent volume claim overriding existing directory's owner and permissions

8/30/2019

In Kubernetes, I am having a directory permission problem. I am testing with a pod that creates a bare-bones Elasticsearch instance, built from an Elasticsearch-provided Docker image.

If I use a basic .yaml file to define the container, everything starts up. The problem happens when I attempt to replace a directory created by the Docker image with a directory created by mounting the persistent volume.

The original directory was

drwxrwxr-x  1 elasticsearch root   4096 Aug 30 19:25 data

and if I mount the persistent volume, it changes the owner and permissions to

drwxr-xr-x  2 root          root   4096 Aug 30 19:53 data

Now, with the elasticsearch process running as the elasticsearch user, this directory can no longer be accessed.

I have set the pod security context's fsGroup to 1000 to match the elasticsearch group, and the container security context's runAsUser to 0. I have tried various other combinations of users and groups, but to no avail.
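For reference, those attempts boil down to the two stanzas below (trimmed from the full pod spec that follows, with comments noting what each field is meant to do):

securityContext:
  fsGroup: 1000        # pod-level: supplemental group for the pod; also applied as group owner on mounted volumes, where the volume type supports it

containers:
- name: es01
  securityContext:
    runAsUser: 0       # container-level: run the entrypoint as root instead of the image's default elasticsearch user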

Here are my pod, persistent volume claim, and persistent volume definitions.

Any suggestions are welcome.

apiVersion: v1
kind: Pod
metadata:
  name: elasticfirst
  labels:
    app: elasticsearch

spec:
  securityContext:
    fsGroup: 1000

  containers:
  - name: es01
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
    securityContext:
      runAsUser: 0
    resources:
      limits:
        memory: 2Gi
        cpu: 200m
      requests:
        memory: 1Gi
        cpu: 100m
    env: 
      - name: node.name
        value: es01
      - name: discovery.seed_hosts
        value: es01
      - name: cluster.initial_master_nodes
        value: es01
      - name: cluster.name
        value: elasticsearch-cluster
      - name: bootstrap.memory_lock
        value: "true"
      - name: ES_JAVA_OPTS
        value: "-Xms1g -Xmx2g"
    ports:
    - containerPort: 9200
    volumeMounts:
    - mountPath: "/usr/share/elasticsearch/data"
      name: elastic-storage2
  nodeSelector:
    type: compute

  volumes:
  - name: elastic-storage2
    persistentVolumeClaim:
      claimName: elastic-storage2-pvc 



apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elastic-storage2-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 512Mi


apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-storage2-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 512Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/tmp/pv
-- Scott S
elasticsearch
kubernetes

1 Answer

9/1/2019

Your question is a tiny bit confusing about what is happening versus what you want to happen, but in general that problem is a common one; that's why many setups use an initContainer: to change the ownership of freshly provisioned PersistentVolumes (as in this example).

In such a setup, the initContainer: would run as root, but would also presumably be a very thin container whose job is only to chown and then exit, leaving your application container -- elasticsearch in your example -- free to run as an unprivileged user.

spec:
  initContainers:
  - name: chown
    image: busybox
    command:
    - chown
    - -R
    - "1000:1000"
    - /the/data
    volumeMounts:
    - name: es-data
      mountPath: /the/data
  containers:
  - name: es
    # etc etc
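
To make that concrete for the pod in the question, here is a minimal sketch of how the init container could slot into the existing spec; it assumes the elasticsearch user in the official image is uid/gid 1000, which is what the question's fsGroup setting implies:

spec:
  initContainers:
  - name: chown-data
    image: busybox
    # chown the mounted volume so the image's elasticsearch user (assumed
    # uid/gid 1000) can write to it, then exit
    command: ["chown", "-R", "1000:1000", "/usr/share/elasticsearch/data"]
    volumeMounts:
    - name: elastic-storage2
      mountPath: /usr/share/elasticsearch/data
  containers:
  - name: es01
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
    # runAsUser: 0 can likely be dropped once the data directory is owned by 1000:1000
    volumeMounts:
    - name: elastic-storage2
      mountPath: /usr/share/elasticsearch/data
  volumes:
  - name: elastic-storage2
    persistentVolumeClaim:
      claimName: elastic-storage2-pvc

One design note: hostPath volumes are generally not covered by the kubelet's fsGroup ownership management, which is likely why setting fsGroup alone left the directory root-owned; the explicit chown in the init container changes the ownership instead, after which running the main container as root should no longer be necessary.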
-- mdaniel
Source: StackOverflow