GKE ReadOnlyMany Persistent Volume with ReadOnlyMany Claims in Multiple Namespaces

7/18/2016

I have a disk image with mirrors of some protein databases (HHsearch, BLAST, PDB, etc.) that I build with some CI tooling and write to a GCE disk to run against. I'd like to access this ReadOnlyMany PV from Pods created by ReplicationControllers in multiple namespaces via PersistentVolumeClaims, but I'm not getting the expected result.

The PersistentVolume configuration looks like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: "databases"
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: "databases-us-central1-b-kube"
    fsType: "ext4"

How it looks when loaded into Kubernetes:

$ kubectl describe pv
Name:       databases
Labels:     <none>
Status:     Bound
Claim:      production/databases
Reclaim Policy: Retain
Access Modes:   ROX
Capacity:   500Gi
Message:
Source:
    Type:   GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName: databases-us-central1-b-kube
    FSType: ext4
    Partition:  0
    ReadOnly:   false

The PVC configurations are all identical and look like this:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: databases
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 500Gi  # must be non-empty; matched to the PV capacity here
  volumeName: databases

And the PVCs as they look in the system:

$ for ns in {development,staging,production}; do kubectl describe --namespace=$ns pvc; done
Name:       databases
Namespace:  development
Status:     Pending
Volume:     databases
Labels:     <none>
Capacity:   0
Access Modes:


Name:       databases
Namespace:  staging
Status:     Pending
Volume:     databases
Labels:     <none>
Capacity:   0
Access Modes:


Name:       databases
Namespace:  production
Status:     Bound
Volume:     databases
Labels:     <none>
Capacity:   0
Access Modes:

When I run $ kubectl get events --all-namespaces, I see lots of events like:

timeout expired waiting for volumes to attach/mount for pod "mypod-anid"/"[namespace]". list of unattached/unmounted volumes=[databases]

When I scale the RC from 1 to 2 replicas in production (the one namespace where a Pod did manage to bind the PV), the second Pod fails to mount the PVC. When I instead create a second ReplicationController and a second PersistentVolumeClaim in production, backed by the same PersistentVolume, that second Pod/PVC cannot bind either.

Am I missing something? How is one supposed to actually use an ROX PersistentVolume with PersistentVolumeClaims?

-- pnovotnak
google-compute-engine
google-kubernetes-engine
kubernetes

1 Answer

8/2/2016

A single PV can only be bound to a single PVC at a given time, regardless of whether it is ReadOnlyMany or not (once a PV/PVC binds, the PV can't bind to any other PVC).
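This binding is recorded on the PV object itself: its spec.claimRef field points at exactly one claim, identified by namespace and name. As a rough sketch (the control plane normally fills claimRef in at bind time; it can also be pre-set by hand to reserve a PV for a particular claim), the bound PV from the question effectively looks like this on the server:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: databases
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: "databases-us-central1-b-kube"
    fsType: "ext4"
  # Set at bind time; while present, no other PVC can bind this PV.
  claimRef:
    namespace: production
    name: databases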

Once a PV/PVC is bound, a ReadOnlyMany PVC may be referenced from multiple pods. In Peter's case, however, he can't use a single PVC object, since he is trying to refer to it from multiple namespaces (PVCs are namespaced objects, while PVs are not).
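Within a single namespace, multiple pods can share the one bound claim. A minimal sketch (the controller name, labels, and image are placeholders, not from the question); note readOnly: true on both the volume mount and the claim reference, which is what allows a GCE PD to be attached to more than one node at a time:

apiVersion: v1
kind: ReplicationController
metadata:
  name: databases-consumer        # placeholder name
spec:
  replicas: 2                     # both replicas mount the same PVC
  template:
    metadata:
      labels:
        app: databases-consumer
    spec:
      containers:
        - name: worker
          image: busybox          # placeholder image
          command: ["sleep", "3600"]
          volumeMounts:
            - name: databases
              mountPath: /databases
              readOnly: true
      volumes:
        - name: databases
          persistentVolumeClaim:
            claimName: databases  # the one PVC bound to the PV
            readOnly: true        # required to share a GCE PD across nodes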

To make this scenario work, create multiple PV objects that are identical (referring to the same disk) except for the name. This will allow each PVC object (in all namespaces) to find a PV object to bind to.
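Concretely, that could look like the following (the PV names here are illustrative; each points at the same pdName). Marking the disk readOnly on the PV source is also sensible here, since a GCE PD can only be attached to multiple nodes read-only:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: databases-development    # one PV per namespace, same disk
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: "databases-us-central1-b-kube"
    fsType: "ext4"
    readOnly: true
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: databases-staging
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: "databases-us-central1-b-kube"
    fsType: "ext4"
    readOnly: true

Each namespace's PVC then sets volumeName to its own PV (e.g. volumeName: databases-development in the development namespace), so every claim has a distinct PV to bind to.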

-- Saad Ali
Source: StackOverflow