Issue with ownership on mounted volumes in a Kubernetes Pod

2/2/2022

I am trying to get a stateful PostgreSQL instance running in a Tanzu Kubernetes cluster ...

~> kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.8", GitCommit:"5575935422cc1cf5169dfc8847cb587aa47bac5a", GitTreeState:"clean", BuildDate:"2021-06-16T13:00:45Z", GoVersion:"go1.15.13", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.8+vmware.1", GitCommit:"3e397df2f5dadadfa35958ec45c14b0e81abc25f", GitTreeState:"clean", BuildDate:"2021-06-21T16:59:40Z", GoVersion:"go1.15.13", Compiler:"gc", Platform:"linux/amd64"}

and have some trouble with it.

I use a custom image in which PostgreSQL runs as the postgres user, and three volumes need to be mounted. It seems Kubernetes mounts those volumes as root:root, and because of that the pod never spins up, failing with the error message below (a way to double-check the ownership is sketched after the log):

> kcl logs statefulset.apps/postgres-stateful
starting up postgres docker image:
postgres -D /opt/db/data/postgres/data
+ echo 'starting up postgres docker image:'
+ echo postgres -D /opt/db/data/postgres/data
+ '[' '!' -d /opt/db/data/postgres/data ']'
+ '[' '!' -O /opt/db/data/postgres/data ']'
+ mkdir -p /opt/db/data/postgres/data
+ chmod 700 /opt/db/data/postgres/data
chmod: changing permissions of '/opt/db/data/postgres/data': Operation not permitted
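
To double-check what the volume looks like from inside the cluster, a throwaway pod that mounts the same PVC can list the ownership. This is a minimal sketch: the pod name is made up, the claim name is taken from the manifest further down, and it only schedules if the PVC is not already attached on another node.

apiVersion: v1
kind: Pod
metadata:
  name: pv-ownership-check   # hypothetical name, for illustration only
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    # print owner/group of the mount point, then exit
    command: ["ls", "-ld", "/mnt/data"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pgdata33-pvc

kubectl logs pv-ownership-check then shows the owning UID/GID of the mount point.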

This comes from the docker-entrypoint.sh that runs inside the container on startup. I have come to the point where it looks like I have to make sure the container runs as the postgres user (which is set via the USER directive in the Dockerfile my custom image is based on). When I run the image directly (either podman run ... or kubectl run ...), everything works.
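
For context, the relevant part of such a base image's Dockerfile looks roughly like this (a sketch, not the actual file; UID/GID 1000 matches the securityContext further down):

FROM debian:bullseye
# create the postgres user with a fixed UID/GID
RUN groupadd -g 1000 postgres && useradd -u 1000 -g postgres -m postgres
COPY docker-entrypoint.sh /usr/local/bin/
# everything from here on, including the container's main process,
# runs as postgres instead of root
USER postgres
ENTRYPOINT ["docker-entrypoint.sh"]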

I found this thread on the issue, which suggests the following as a solution:

apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
  # specification of the pod's containers
  # ...
  securityContext:
    fsGroup: 1234

I have adapted this pattern to the StatefulSet I am using, but I cannot seem to make it work.

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-stateful
  labels:
    app: postgres
spec:
  serviceName: "postgres"
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: docker-dev-local.intern.net/ina/postgresql:14.1-scm-debian-bullseye-build-74-4
        envFrom:
        - configMapRef:
            name: postgres-configuration
        ports:
        - containerPort: 5432
          name: postgresdb
        volumeMounts:
        - name: pv-data
          mountPath: /opt/db/data/postgres/data
        - name: pv-backup
          mountPath: /opt/db/backup/postgres
        - name: pv-arch
          mountPath: /opt/db/backup/postgres/arch
      securityContext:
        runAsUser: 1000   # postgres UID
        runAsGroup: 1000
        fsGroup: 1000
      volumes:
      - name: pv-data
        persistentVolumeClaim:
          claimName: pgdata33-pvc
      - name: pv-backup
        persistentVolumeClaim:
          claimName: pgbackup33-pvc
      - name: pv-arch
        persistentVolumeClaim:
          claimName: pgarch33-pvc

Now I am wondering whether the location of the securityContext (at the same level as containers and volumes) may be wrong. Can anybody kindly advise on this matter?
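
From the API reference, the placement itself looks right: fsGroup is a field of the pod-level securityContext (PodSecurityContext), while the container-level securityContext accepts runAsUser but has no fsGroup field at all. Roughly:

spec:
  securityContext:        # pod-level (PodSecurityContext)
    fsGroup: 1000         # valid here
  containers:
  - name: postgres
    securityContext:      # container-level (SecurityContext)
      runAsUser: 1000     # valid here
      # fsGroup: 1000     # not a field here; the API server rejects it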

-- vrms
kubernetes
postgresql

1 Answer

2/3/2022

fsGroup requires support from the storage driver.

As you've confirmed, you are using hostPath volumes, and in that case fsGroup is not supposed to work: it is deliberately not applied to hostPath volumes for security reasons.

So yes, generally an init container (running as the root user) is the only viable option for hostPath volumes.
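
A minimal sketch of that approach, reusing names and paths from the question's manifest (untested; adjust UID/GID as needed). This goes in the pod template's spec, alongside containers:

      initContainers:
      - name: fix-permissions       # hypothetical name
        image: busybox
        # fix ownership/permissions before the postgres container starts
        command:
        - sh
        - -c
        - chown -R 1000:1000 /opt/db/data/postgres/data && chmod 700 /opt/db/data/postgres/data
        securityContext:
          runAsUser: 0              # overrides the pod-level runAsUser so chown is permitted
        volumeMounts:
        - name: pv-data
          mountPath: /opt/db/data/postgres/data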

-- Olesya Bolobova
Source: StackOverflow