Kubernetes Persistent Volume Claim Indefinitely in Pending State

7/3/2017

I created a PersistentVolume sourced from a Google Compute Engine persistent disk that I had already formatted and provisioned with data. Kubernetes says the PersistentVolume is available.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true

I then created a PersistentVolumeClaim so that I could attach this volume to multiple pods across multiple nodes. However, Kubernetes reports that the claim stays in a Pending state indefinitely.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0

Any insights? I feel there may be something wrong with the selector...

Is it even possible to preconfigure a persistent disk with data and have pods across multiple nodes all be able to read from it?
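For reference, this is roughly how I plan to consume the claim from a pod (the pod name and image below are just placeholders; readOnly is set to match the ReadOnlyMany access mode):

apiVersion: v1
kind: Pod
metadata:
  name: models-reader          # placeholder pod name
spec:
  containers:
    - name: app
      image: busybox           # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: models
          mountPath: /models
          readOnly: true       # read-only mount, matching ReadOnlyMany
  volumes:
    - name: models
      persistentVolumeClaim:
        claimName: models-1-0-0-claim
        readOnly: true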

-- Akash Krishnan
kubernetes
persistent-volume-claims
persistent-volumes

6 Answers

9/12/2018

I faced the same issue, with the PersistentVolumeClaim stuck in the Pending phase indefinitely. I tried setting storageClassName to 'default' in the PersistentVolume, just as I had for the PersistentVolumeClaim, but that did not fix it.

I made one change in my persistentvolume.yml: I moved the PersistentVolumeClaim config to the top of the file and made the PersistentVolume the second config in the same file. That fixed the issue.

We need to make sure that the PersistentVolumeClaim is created first and the PersistentVolume afterwards to resolve this 'Pending' phase issue.

I am posting this answer after testing it a few times, hoping that it might help someone struggling with this.
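A minimal sketch of what the reordered file looks like (the names, sizes, and hostPath here are placeholders, not my exact manifests):

# persistentvolume.yml -- PersistentVolumeClaim first, PersistentVolume second
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-claim             # placeholder name
spec:
  storageClassName: default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: data-pv                # placeholder name
spec:
  storageClassName: default
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"          # placeholder path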

-- Adnan Raza
Source: StackOverflow

5/4/2019

I've seen this behaviour in microk8s 1.14.1 when two PersistentVolumes have the same value for spec/hostPath/path, e.g.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-name
  labels:
    type: local
    app: app
spec:
  storageClassName: standard
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/k8s-app-data"

It seems that microk8s is event-based (which isn't really necessary on a one-node cluster) and throws away information about failing operations, resulting in unnecessarily poor feedback for almost all failures.
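If you suspect this, one way to spot duplicate paths is to list each PersistentVolume together with its hostPath (a sketch using kubectl's custom-columns output; adjust the columns to your setup):

kubectl get pv -o custom-columns=NAME:.metadata.name,PATH:.spec.hostPath.path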

-- Karl Richter
Source: StackOverflow

7/3/2017

I quickly realized that a PersistentVolumeClaim defaults its storageClassName field to standard (the default StorageClass on GKE) when it isn't specified. However, when creating a PersistentVolume, storageClassName has no default, so the claim never matches the volume and stays Pending.

The following worked for me:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  storageClassName: standard
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0
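
After applying both, the claim should bind to the volume; you can verify with something like:

kubectl get pv,pvc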
-- Akash Krishnan
Source: StackOverflow

7/4/2017

With dynamic provisioning, you shouldn't have to create PVs and PVCs separately. In Kubernetes 1.6+, there are default provisioners for GKE and some other cloud environments, which should let you just create a PVC and have it automatically provision a PV and an underlying Persistent Disk for you.
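For example, on GKE a claim along these lines (the name and size are illustrative) should dynamically provision a PersistentVolume and a backing Persistent Disk via the cluster's default StorageClass:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamic-claim          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
  # no storageClassName: the cluster's default StorageClass is used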

For more on dynamic provisioning, see:

https://kubernetes.io/blog/2017/03/dynamic-provisioning-and-storage-classes-kubernetes/

-- Anirudh Ramanathan
Source: StackOverflow

2/13/2020

If you're using Microk8s, you have to enable storage before a PersistentVolumeClaim can bind successfully.

Just do:

microk8s.enable storage

You'll need to delete your deployment and start again.

You may also need to manually delete the "pending" PersistentVolumeClaims, because I found that uninstalling the Helm chart that created them didn't clear the PVCs out.

You can do this by first finding a list of names:

kubectl get pvc --all-namespaces

then deleting each one (adding -n <namespace> for claims outside your current namespace) with:

kubectl delete pvc name1 name2 etc...

Once storage is enabled, reapplying your deployment should get things going.
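
To confirm the fix took effect, you can check that the storage add-on created its hostpath-backed StorageClass (microk8s-hostpath, if I recall correctly) and that the claims have moved from Pending to Bound, with something like:

kubectl get storageclass
kubectl get pvc --all-namespaces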

-- LondonRob
Source: StackOverflow

5/10/2019

Make sure the node VM also has enough free disk space.
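
For example, you can check free space on the node and look for disk-pressure conditions with something like (node name is a placeholder):

df -h
kubectl describe node <node-name> | grep -i pressure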

-- William Loch
Source: StackOverflow