Kubernetes NFS PersistentVolumeClaim has status Pending

6/14/2017

I am trying to configure my Kubernetes cluster to use a local NFS server for persistent volumes.

I set up the PersistentVolume as follows:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: hq-storage-u4
  namespace: my-ns
spec:
  capacity:
    storage: 10Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/u4
    server: 10.30.136.79
    readOnly: false

The PV looks OK in kubectl:

$ kubectl get pv
NAME            CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM             STORAGECLASS   REASON    AGE
hq-storage-u4   10Ti       RWX           Retain          Released   my-ns/pv-50g                               49m

I then try to create the PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-50gb
  namespace: my-ns
spec:
  accessModes:
  - ReadWriteMany
  resources:
     requests:
       storage: 5Gi

kubectl shows the PVC status as Pending:

$ kubectl get pvc
NAME       STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc-50gb   Pending                                                     16m

When I try to add the volume to a deployment, I get the error:

[SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected.]

How do I get the PVC to a working state?

-- zoidberg
docker
kubernetes
nfs

2 Answers

6/15/2017

I can't comment on your post, so I'll just attempt to answer this.

I've encountered 2 kinds of errors when PVCs don't work on my NFS cluster. Creating a PV usually succeeds, so the status message by itself doesn't say much.

  1. The annotations and spec of the PV and the PVC don't match (access modes, storage class, and so on). This doesn't look like the case here.
  2. The node running the pod that uses the NFS volume cannot mount the share. Try mount -t nfs 10.30.136.79:/data/u4 /mnt on the node that is supposed to mount it; this should succeed. If it fails, it is usually one of the following (see the commands after this list):
    1. Missing mount permissions. Rectify /etc/exports on your NFS server.
    2. A firewall blocking the NFS ports. Fix the firewall.
One more thing: a non-privileged user in the pod might have trouble writing to the NFS share. The uid/gid of the user inside the pod must match the ownership and permissions of the exported directory.
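
For example, if the export happens to be owned by uid/gid 1000 (an assumption here, check your server), you could hint the pod's user with a securityContext along these lines:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
  namespace: my-ns
spec:
  securityContext:
    runAsUser: 1000   # must match the owner of /data/u4 on the NFS server
    fsGroup: 1000
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-50gb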

Bonne chance!

-- Eugene Chow
Source: StackOverflow

6/15/2017

It turned out that I needed to put the server IP (I also quoted the path) in quotes in the PV spec. After fixing that, the PVC goes to status Bound, and the pod can mount the volume correctly.
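
For anyone else hitting this, the relevant part of the PV spec after the fix (quotes added around server and path):

  nfs:
    path: "/data/u4"
    server: "10.30.136.79"
    readOnly: false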

-- zoidberg
Source: StackOverflow