GKE PersistentVolumeClaim for storageClassName "standard" is in pending state forever

11/5/2019

I applied my PVC YAML file to my GKE cluster and checked its state. It shows the following for the resource:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"teamcity","namespace":"default"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"3Gi"}}}}
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
  creationTimestamp: "2019-11-05T09:45:20Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: teamcity
  namespace: default
  resourceVersion: "1358093"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/teamcity
  uid: fb51d295-ffb0-11e9-af7d-42010a8400aa
spec:
  accessModes:
  - ReadWriteMany
  dataSource: null
  resources:
    requests:
      storage: 3Gi
  storageClassName: standard
  volumeMode: Filesystem
status:
  phase: Pending

I did not create anything like a storage class, or whatever else needs to be set up for this, because I read that it is provided automatically by GKE. Any idea what I am missing?

-- xetra11
google-kubernetes-engine
kubernetes
persistent-volume-claims

1 Answer

11/5/2019

GKE includes default support for provisioning GCP persistent disk PVs, however those only implement the ReadWriteOnce and ReadOnlyMany access modes. I do not think GKE includes a provisioner for ReadWriteMany by default.
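
For reference, a minimal sketch of the same claim using ReadWriteOnce, which the default gce-pd provisioner should be able to satisfy with the built-in standard storage class (the name and size below just mirror your original claim):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: teamcity
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce          # supported by the default gce-pd provisioner
  storageClassName: standard
  resources:
    requests:
      storage: 3Gi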

EDIT: While it's not set up by default (because it requires further configuration), the question "How do I create a persistent volume claim with ReadWriteMany in GKE?" shows how to use Cloud Filestore to launch a hosted NFS-compatible server and then point a provisioner at it.
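
If you only need a single ReadWriteMany volume rather than full dynamic provisioning, another option is to create the Filestore instance yourself and wire it up with a static NFS PersistentVolume. A rough sketch, assuming a Filestore instance exporting /vol1 at 10.0.0.2 (both values are placeholders you would replace with your instance's IP and share name):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: teamcity-nfs
spec:
  capacity:
    storage: 1Ti             # Filestore's smallest tier is large; size is illustrative
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.2         # placeholder: Filestore instance IP
    path: /vol1              # placeholder: Filestore export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: teamcity
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""       # empty string skips dynamic provisioning and binds the static PV
  volumeName: teamcity-nfs
  resources:
    requests:
      storage: 1Ti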

-- coderanger
Source: StackOverflow