What are reasons for "Unable to mount volumes for pod" "timeout expired waiting for ..." in Google Kubernetes Engine?

4/25/2019

I am deploying 3 pods to Google Kubernetes Engine. Two of the pods share ReadOnlyMany bindings to a pre-existing persistent volume, mapped to a gcePersistentDisk. One of the pods starts. The other doesn't, and eventually times out with the error "Unable to mount volumes for pod"

There are no errors shown by kubectl describe pv or kubectl describe pvc. kubectl describe pvc shows each persistent volume claim as Bound and in use by the pod that isn't starting.
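
(For reference, the binding status can be confirmed with commands along these lines; the resource names match the manifests below.)

kubectl get pv configuration kb content
kubectl get pvc config-pvc kb-pvc content-pvc
kubectl describe pvc kb-pvc    # shows Status: Bound and Volume: kb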

Relevant configuration:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: configuration
spec:
  capacity:
    storage: 1G
  accessModes:
    - ReadOnlyMany
  storageClassName: ""
  gcePersistentDisk:
    fsType: ext4
    readOnly: true
    pdName: my-persistent-disk-name
  persistentVolumeReclaimPolicy: Retain
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: kb
spec:
  capacity:
    storage: 1G
  accessModes:
    - ReadOnlyMany
  storageClassName: ""
  gcePersistentDisk:
    fsType: ext4
    readOnly: true
    pdName: my-persistent-disk-name
  persistentVolumeReclaimPolicy: Retain

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: content
spec:
  capacity:
    storage: 1G
  accessModes:
    - ReadOnlyMany
  storageClassName: ""
  gcePersistentDisk:
    fsType: ext4
    readOnly: true
    pdName: my-persistent-disk-name
  persistentVolumeReclaimPolicy: Retain
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: config-pvc
spec:
  accessModes:
    - ReadOnlyMany
  volumeName: configuration
  storageClassName: ""
  resources:
    requests:
      storage: 1G
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kb-pvc
spec:
  accessModes:
    - ReadOnlyMany
  volumeName: kb
  storageClassName: ""
  resources:
    requests:
      storage: 1G
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: content-pvc
spec:
  accessModes:
    - ReadOnlyMany
  volumeName: content
  storageClassName: ""
  resources:
    requests:
      storage: 1G
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: worker
spec:
  template:
    spec:
      containers:
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1.20.0
          args: [ "-http-port", "8080", ... ]
        - name: workers
          image: my-registry/my-image:my-version
          volumeMounts:
            - name: config
              mountPath: /config
              subPath: ./config
              readOnly: true
            - name: kb
              mountPath: /kb
              subPath: ./kb
              readOnly: true
            - name: content
              mountPath: /content
              subPath: ./content
              readOnly: true
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: config-pvc
            readOnly: true
        - name: kb
          persistentVolumeClaim:
            claimName: kb-pvc
            readOnly: true
        - name: content
          persistentVolumeClaim:
            claimName: content-pvc
            readOnly: true
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: another-worker
spec:
  template:
    spec:
      containers:
        - name: another-worker-name
          image: my-registry/my-other-image:my-other-version
         command: ["./run-server.sh", "--path-data", "/config/data/"]
          args: []
          volumeMounts:
            - name: kb
              mountPath: /config
              subPath: ./kb/i2k_context
              readOnly: true
      volumes:
        - name: kb
          persistentVolumeClaim:
            claimName: kb-pvc
            readOnly: true

The pod named "worker" in the above example should start running. It doesn't, and eventually shows a timeout error with unmounted and/or unattached volumes.

The pod named "another-worker" starts and runs as expected.

-- Eric Schoen
google-kubernetes-engine
kubernetes

1 Answer

4/25/2019

It looks like GKE doesn't permit multiple PersistentVolume objects mapped to the same gcePersistentDisk, even if all of the volumes and volume claims are ReadOnlyMany.

The reason I had three different PVs for the same gcePersistentDisk was to allow for flexibility in deployment: the ability to have those three volume bindings actually point to different persistent disks.

Since I'm not using that flexibility at the moment, I changed the volumes and volumeMounts in the worker pod to use a single PVC with different subPath and mountPath values, and the pod started up immediately.
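
(A rough sketch of the resulting worker pod spec fragment, esp container omitted; the shared volume name and the choice of config-pvc as the single remaining claim are assumptions for illustration, not the exact manifest I deployed.)

      containers:
        - name: workers
          image: my-registry/my-image:my-version
          volumeMounts:
            # all three mounts come from the same claim,
            # differing only in subPath and mountPath
            - name: shared
              mountPath: /config
              subPath: config
              readOnly: true
            - name: shared
              mountPath: /kb
              subPath: kb
              readOnly: true
            - name: shared
              mountPath: /content
              subPath: content
              readOnly: true
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: config-pvc   # single remaining claim; name is an assumption
            readOnly: true

Mounting the same volume several times with different subPath and mountPath values is allowed, so the three directories on the disk still appear at the same paths inside the container.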

-- Eric Schoen
Source: StackOverflow