Kubernetes: failed to mount unformatted volume as read only

4/25/2019

I am trying to use a gcePersistentDisk as ReadOnlyMany so that my pods on multiple nodes can read the data on this disk. I am following the documentation here.

To create and then format the GCE persistent disk, I followed the instructions in the documentation here: I SSHed into one of the nodes and formatted the disk. The complete error and the other YAML files are below.
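
For reference, I created and formatted the disk roughly like this (the zone and node name below are placeholders for my actual values):

gcloud compute disks create mydisk0 --size=10GB --zone=<zone>
gcloud compute instances attach-disk <node-name> --disk=mydisk0 --device-name=mydisk0 --zone=<zone>
# on the node, after SSHing in:
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/disk/by-id/google-mydisk0
gcloud compute instances detach-disk <node-name> --disk=mydisk0 --zone=<zone>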

kubectl describe pods -l podName

Name:               punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-mycluster-default-pool-b1c1d316-d016/10.160.0.12
Start Time:         Thu, 25 Apr 2019 23:55:38 +0530
Labels:             app.kubernetes.io/instance=punk-fly
                    app.kubernetes.io/name=nodejs
                    pod-template-hash=1866836461
Annotations:        kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container nodejs
Status:             Pending
IP:
Controlled By:      ReplicaSet/punk-fly-nodejs-deployment-5dbbd7b8b5
Containers:
  nodejs:
    Container ID:
    Image:          rajesh12/smartserver:server
    Image ID:
    Port:           3002/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False

    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      MYSQL_HOST:           mysqlservice
      MYSQL_DATABASE:       app
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /usr/src/ from helm-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jpkzg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  helm-vol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  my-readonly-pvc
    ReadOnly:   true
  default-token-jpkzg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jpkzg
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age               From                                               Message
  ----     ------                  ----              ----                                               -------
  Normal   Scheduled               2m                default-scheduler                                  Successfully assigned default/punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs to gke-mycluster-default-pool-b1c1d316-d016
  Normal   SuccessfulAttachVolume  1m                attachdetach-controller                            AttachVolume.Attach succeeded for volume "pvc-9c796180-677e-11e9-ad35-42010aa0000f"
  Warning  FailedMount             10s (x8 over 1m)  kubelet, gke-mycluster-default-pool-b1c1d316-d016  MountVolume.MountDevice failed for volume "pvc-9c796180-677e-11e9-ad35-42010aa0000f" : failed to mount unformatted volume as read only
  Warning  FailedMount             0s                kubelet, gke-mycluster-default-pool-b1c1d316-d016  Unable to mount volumes for pod "punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs_default(86293044-6787-11e9-ad35-42010aa0000f)": timeout expired waiting for volumes to attach or mount for pod "default"/"punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs". list of unmounted volumes=[helm-vol]. list of unattached volumes=[helm-vol default-token-jpkzg]

readonly_pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-readonly-pv
spec:
  storageClassName: ""
  capacity:
    storage: 1G
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: mydisk0
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-readonly-pvc
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1G

deployment.yaml

  volumes:
    - name: helm-vol
      persistentVolumeClaim:
        claimName: my-readonly-pvc
        readOnly: true
  containers:
    - name: {{ .Values.app.backendName }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tagServer }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      env:
      - name: MYSQL_HOST
        value: mysqlservice
      - name: MYSQL_DATABASE
        value: app
      - name: MYSQL_ROOT_PASSWORD
        value: password
      ports:
        - name: http-backend
          containerPort: 3002
      volumeMounts:
        - name: helm-vol
          mountPath: /usr/src/
-- Rajesh Gupta
docker
google-kubernetes-engine
kubernetes

3 Answers

1/27/2020

I had the same error and managed to fix it with a few lines from the related article on using preexisting disks: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd

You need to add storageClassName and volumeName to your persistent volume claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-demo
spec:
  # It's necessary to specify "" as the storageClassName
  # so that the default storage class won't be used, see
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
  storageClassName: ""
  volumeName: pv-demo
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 500G
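
The volumeName points the claim at a specific PersistentVolume, so a matching PV backed by the preexisting, already formatted disk has to exist as well. A minimal sketch (the pdName below is a placeholder for your own disk name):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  storageClassName: ""
  capacity:
    storage: 500G
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: my-existing-disk
    fsType: ext4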
-- Ievgen Goichuk
Source: StackOverflow

1/7/2020

I received the same error message when trying to provision a persistent disk with an access mode of ReadWriteOnce. What fixed the issue for me was removing the property readOnly: true from the volumes declaration of the Deployment spec. In the case of your deployment.yaml file, that would be this block:

volumes:
- name: helm-vol
  persistentVolumeClaim:
    claimName: my-readonly-pvc
    readOnly: true

Try removing that line and see if the error goes away.
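
With that property removed, the volumes declaration is simply:

volumes:
- name: helm-vol
  persistentVolumeClaim:
    claimName: my-readonly-pvc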

-- Micah Knox
Source: StackOverflow

4/26/2019

It sounds like your PVC is dynamically provisioning a new, unformatted volume using the default StorageClass.
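
You can check what the claim actually bound to with standard kubectl commands:

kubectl get pvc my-readonly-pvc
kubectl get pv

If the VOLUME column shows an auto-generated name such as pvc-9c796180-... rather than my-readonly-pv, the claim was satisfied by a freshly provisioned, empty disk, which cannot be formatted while it is mounted read only.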

It could also be that your Pod is being created in a different availability zone from the one where the PV is provisioned. The gotcha with having multiple Pod readers for a GCE volume is that the Pods always have to be in the same availability zone as the disk.

Some options:

  • Simply create and format the disk backing the PV in the same availability zone where your nodes are.

  • When you define your PV you can specify node affinity to constrain which nodes can use the volume (see the sketch after this list).

  • Define a StorageClass that specifies the filesystem:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: mysc
    provisioner: kubernetes.io/gce-pd
    parameters:
      type: pd-standard
      fstype: ext4

    And then use it in your PVC:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc
    spec:
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 1G
      storageClassName: mysc

    The volume will be automatically provisioned and formatted.
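
As a sketch of the node affinity option above, the PV from the question could pin itself to the zone where the disk lives (the zone value is a placeholder; older clusters label nodes with failure-domain.beta.kubernetes.io/zone, newer ones with topology.kubernetes.io/zone):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-readonly-pv
spec:
  storageClassName: ""
  capacity:
    storage: 1G
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: mydisk0
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: failure-domain.beta.kubernetes.io/zone
              operator: In
              values:
                - <zone-of-mydisk0>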

-- Rico
Source: StackOverflow