Google Kubernetes Engine (GKE) cluster `error while creating mount source path` due to `read-only file system`

10/19/2020

I have a container with the following configuration:

spec:
  template:
    spec:
      restartPolicy: OnFailure
      volumes:
        - name: local-src
          hostPath:
            path: /src/analysis/src
            type: DirectoryOrCreate
      containers:
        - name: market-state
          securityContext:
            privileged: true
            capabilities:
              add:
                - SYS_ADMIN
  • Note that I'm intentionally omitting some other configuration parameters to keep the question short

However, when I deploy it to my GKE cluster on Google Cloud, I see the following error:

Error: failed to start container "market-state": Error response from daemon: error while creating mount source path '/src/analysis/src': mkdir /src: read-only file system

I have tried deploying the exact same job locally with minikube and it works fine.

My guess is that this has to do with the pod's permissions relative to the host, but I expected it to work given the SYS_ADMIN capability and privileged mode that I'm setting. When creating my cluster, I gave it a devstorage.read_write scope for other reasons, but am wondering if there are other scopes I need as well?

gcloud container clusters create my-cluster \
    --zone us-west1-a \
    --node-locations us-west1-a \
    --scopes=https://www.googleapis.com/auth/devstorage.read_write


-- Olshansk
gcloud
google-kubernetes-engine
kubernetes
permissions
readonly

2 Answers

10/30/2020

As pointed out by user @DazWilkin:

IIUC, if your cluster is using Container-Optimized VMs, you'll need to be aware of the structure of the file system for these instances.

See https://cloud.google.com/container-optimized-os/docs/concepts/disks-and-filesystem

This is a correct understanding. You can't write to a read-only location like / (even with the SYS_ADMIN capability and privileged: true) because of the following:

The root filesystem is mounted as read-only to protect system integrity. However, home directories and /mnt/stateful_partition are persistent and writable.

-- Cloud.google.com: Container optimized OS: Docs: Concepts: Disk and filesystem: Filesystem

As a workaround, you can either change the location of your hostPath to a writable path on the node (see the sketch below), or use GKE nodes that run the Ubuntu image instead of the Container-Optimized OS image; with Ubuntu nodes you will be able to use hostPath volumes with paths as specified in your question. You can read more about the available node images in the official documentation.
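A minimal sketch of the first workaround, assuming you relocate the directory under /mnt/stateful_partition, which the quoted documentation lists as writable (the exact subdirectory is a placeholder):

spec:
  template:
    spec:
      volumes:
        - name: local-src
          hostPath:
            # /mnt/stateful_partition is persistent and writable on Container-Optimized OS,
            # so DirectoryOrCreate can create the missing subdirectories here.
            path: /mnt/stateful_partition/analysis/src
            type: DirectoryOrCreate

For the second workaround, the node image can be selected at cluster (or node pool) creation time with the --image-type flag, for example --image-type=UBUNTU_CONTAINERD.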


If your workload/use case allows using Persistent Volumes, I encourage you to do so.

PersistentVolume resources are used to manage durable storage in a cluster. In GKE, PersistentVolumes are typically backed by Compute Engine persistent disks.

<--->

PersistentVolumes are cluster resources that exist independently of Pods. This means that the disk and data represented by a PersistentVolume continue to exist as the cluster changes and as Pods are deleted and recreated. PersistentVolume resources can be provisioned dynamically through PersistentVolumeClaims, or they can be explicitly created by a cluster administrator.

-- Cloud.google.com: Kubernetes Engine: Persistent Volumes
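A minimal sketch of this approach, assuming dynamic provisioning through a PersistentVolumeClaim (the claim name, size, image, and mount path are placeholders; on GKE the default StorageClass typically provisions a Compute Engine persistent disk):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: analysis-src-pvc            # placeholder claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                 # placeholder size
---
apiVersion: v1
kind: Pod
metadata:
  name: market-state
spec:
  containers:
    - name: market-state
      image: gcr.io/my-project/market-state   # placeholder image
      volumeMounts:
        - name: analysis-src
          mountPath: /src/analysis/src
  volumes:
    - name: analysis-src
      persistentVolumeClaim:
        claimName: analysis-src-pvc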

You can also consider looking at the Local SSD solution, which can use the hostPath type of Volume (see the sketch below).
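A minimal sketch, assuming the node pool was created with local SSDs attached (for example with --local-ssd-count=1); GKE mounts them on the node under /mnt/disks/, so the first disk is typically /mnt/disks/ssd0. The pod name, image, and mount path are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: market-state
spec:
  containers:
    - name: market-state
      image: gcr.io/my-project/market-state   # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /scratch                 # placeholder mount path
  volumes:
    - name: scratch
      hostPath:
        # Local SSDs attached to a GKE node are mounted at /mnt/disks/ssd0, /mnt/disks/ssd1, ...
        path: /mnt/disks/ssd0
        type: Directory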


When creating my cluster, I gave it a devstorage.read_write scope for other reason, but am wondering if there are other scopes I need as well?

You can create a GKE cluster without adding any additional scopes, like:

$ gcloud container clusters create CLUSTER_NAME --zone=ZONE

The --scopes=SCOPE value will depend on the workload you intend to run on the cluster. You can assign scopes that grant access to specific Cloud Platform services (like Cloud Storage, for example).
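As a sketch, assuming you want the default GKE scopes plus read/write access to Cloud Storage (the cluster name and zone are placeholders; gke-default is the alias gcloud uses for the default set of GKE scopes):

gcloud container clusters create my-cluster \
    --zone=us-west1-a \
    --scopes=gke-default,https://www.googleapis.com/auth/devstorage.read_write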

You can read more about it in the gcloud online manual.

To add to the topic of authentication to Cloud Platform services:

There are three ways to authenticate to Google Cloud services using service accounts from within GKE:

  1. Use Workload Identity

Workload Identity is the recommended way to authenticate to Google Cloud services from GKE. Workload Identity allows you to configure Google Cloud service accounts using Kubernetes resources. If this fits your use case, it should be your first option. This example is meant to cover use cases where Workload Identity is not a good fit.

  2. Use the default Compute Engine Service Account

Each node in a GKE cluster is a Compute Engine instance. Therefore, applications running on a GKE cluster by default will attempt to authenticate using the "Compute Engine default service account", and inherit the associated scopes.

This default service account may or may not have permissions to use the Google Cloud services you need. It is possible to expand the scopes for the default service account, but that can create security risks and is not recommended.

  3. Manage Service Account credentials using Secrets

Your final option is to create a service account for your application, and inject the authentication key as a Kubernetes secret. This will be the focus of this tutorial.

-- Cloud.google.com: Kubernetes Engine: Authenticating to Cloud Platform
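For the first option, a minimal sketch of the Kubernetes side of Workload Identity (the namespace, Kubernetes ServiceAccount name, Google service account email, and image below are placeholders, and the cluster must have Workload Identity enabled, e.g. created with --workload-pool=PROJECT_ID.svc.id.goog):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: analysis-ksa                 # placeholder Kubernetes ServiceAccount
  namespace: default
  annotations:
    # Google service account (placeholder) that pods using this ServiceAccount should impersonate.
    iam.gke.io/gcp-service-account: analysis-gsa@my-project.iam.gserviceaccount.com
---
apiVersion: v1
kind: Pod
metadata:
  name: market-state
spec:
  serviceAccountName: analysis-ksa   # run the pod as the annotated ServiceAccount
  containers:
    - name: market-state
      image: gcr.io/my-project/market-state   # placeholder image

On the Google Cloud side, the Google service account also has to allow that Kubernetes ServiceAccount to impersonate it, which is done by granting the roles/iam.workloadIdentityUser role to the member serviceAccount:PROJECT_ID.svc.id.goog[default/analysis-ksa].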

-- Dawid Kruk
Source: StackOverflow

10/19/2020

IIUC, if your cluster is using Container-Optimized VMs, you'll need to be aware of the structure of the file system for these instances.

See https://cloud.google.com/container-optimized-os/docs/concepts/disks-and-filesystem

-- DazWilkin
Source: StackOverflow