Kubernetes deployment read-only filesystem error

4/2/2018

I am facing an error while deploying Airflow on Kubernetes (precisely this version of Airflow: https://github.com/puckel/docker-airflow/blob/1.8.1/Dockerfile) regarding write permissions on the filesystem.

The error displayed on the logs of the pod is:

sed: couldn't open temporary file /usr/local/airflow/sed18bPUH: Read-only file system
sed: -e expression #1, char 131: unterminated `s' command
sed: -e expression #1, char 118: unterminated `s' command
Initialize database...
sed: couldn't open temporary file /usr/local/airflow/sedouxZBL: Read-only file system
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/airflow/configuration.py", line 769, in
    ....
    with open(TEST_CONFIG_FILE, 'w') as f:
IOError: [Errno 30] Read-only file system: '/usr/local/airflow/unittests.cfg'

It seems that the filesystem is read-only, but I do not understand why. I am not sure whether it is a Kubernetes misconfiguration (do I need a special RBAC setting for pods? No idea) or a problem with the Dockerfile.

The deployment file looks like the following:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: airflow
  namespace: test
spec:
  replicas: 1
  revisionHistoryLimit: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: airflow
    spec:
      restartPolicy: Always
      containers:
      - name: webserver
        image: davideberdin/docker-airflow:0.0.4
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 50m
            memory: 128Mi
        securityContext:  #does not have any effect
          runAsUser: 0    #does not have any effect
        ports:
        - name: airflow-web
          containerPort: 8080
        args: ["webserver"]
        volumeMounts:
          - name: airflow-config-volume
            mountPath: /usr/local/airflow
            readOnly: false #does not have any effect
          - name: airflow-logs
            mountPath: /usr/local/logs
            readOnly: false #does not have any effect
      volumes:
      - name: airflow-config-volume
        secret:
          secretName: airflow-config-secret
      - name: airflow-parameters-volume
        secret:
          secretName: airflow-parameters-secret
      - name: airflow-logs
        emptyDir: {}

Any idea how I can make the filesystem writable? The container is running as USER airflow, but I think that this user has root privileges.

-- spaghettifunk
airflow
deployment
docker
kubernetes
sed

2 Answers

7/23/2018

Since Kubernetes version 1.9 onward, the behavior of volumeMounts backed by secret, configMap, downwardAPI and projected volumes has changed to read-only by default.

A workaround is to create an emptyDir volume, copy the contents into it, and execute/write whatever you need there.

Here is a small snippet to demonstrate:

    initContainers:
    - name: copy-ro-scripts
      image: busybox
      command: ['sh', '-c', 'cp /scripts/* /etc/pre-install/']
      volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: pre-install
          mountPath: /etc/pre-install
    volumes:
      - name: pre-install
        emptyDir: {}
      - name: scripts
        configMap:
          name: bla

The merged PR which caused this break :( https://github.com/kubernetes/kubernetes/pull/58720
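
Adapted to the Airflow Deployment from the question, a sketch along the same lines might look like this (the init container name and the /airflow-secret staging path are assumptions for illustration, not part of the original image):

    spec:
      initContainers:
      - name: copy-airflow-config           # hypothetical name
        image: busybox
        # copy the read-only Secret contents into a writable emptyDir
        command: ['sh', '-c', 'cp /airflow-secret/* /airflow-home/']
        volumeMounts:
          - name: airflow-config-volume
            mountPath: /airflow-secret      # assumed staging path
          - name: airflow-home
            mountPath: /airflow-home
      containers:
      - name: webserver
        volumeMounts:
          - name: airflow-home
            mountPath: /usr/local/airflow   # the writable copy, not the Secret itself
      volumes:
      - name: airflow-config-volume
        secret:
          secretName: airflow-config-secret
      - name: airflow-home
        emptyDir: {}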

-- levich
Source: StackOverflow

4/4/2018

    volumeMounts:
      - name: airflow-config-volume
        mountPath: /usr/local/airflow
  volumes:
  - name: airflow-config-volume
    secret:
      secretName: airflow-config-secret

Is the source of your problems, for two reasons: first, you have smashed the airflow user's home directory by mounting your Secret directly onto the place where the image expects a directory owned by the airflow user.

Separately, while I would have to fire up a cluster to confirm 100%, I am pretty sure that Secret volume mounts -- and, I think, their ConfigMap friends -- are read-only projections into the Pod filesystem; that suspicion certainly appears to match your experience. There is no expectation that changes to those volumes propagate back up into the Kubernetes cluster, so why pretend otherwise.

If you want to continue to attempt such a thing, you do actually have influence over the defaultMode of the files projected into that volumeMount, so you could set them to 0666, but caveat emptor for sure. The short version: by far the best fix is not to smash $AIRFLOW_HOME with a volume mount.
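
For illustration, a minimal sketch of the defaultMode setting mentioned above, with the Secret mounted at a hypothetical path outside $AIRFLOW_HOME (the path /usr/local/airflow-config is just an example, not something the image expects):

        volumeMounts:
          - name: airflow-config-volume
            mountPath: /usr/local/airflow-config   # example path outside $AIRFLOW_HOME
      volumes:
      - name: airflow-config-volume
        secret:
          secretName: airflow-config-secret
          defaultMode: 0666   # octal in YAML manifests; JSON requires the decimal equivalent, 438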

-- mdaniel
Source: StackOverflow