I am trying to configure my Kubernetes cluster to use a local NFS server for persistent volumes.
I set up the PersistentVolume as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hq-storage-u4
  namespace: my-ns
spec:
  capacity:
    storage: 10Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/u4
    server: 10.30.136.79
    readOnly: false

The PV looks OK in kubectl:
$ kubectl get pv
NAME            CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM             STORAGECLASS   REASON    AGE
hq-storage-u4   10Ti       RWX           Retain          Released   my-ns/pv-50g                               49m

I then try to create the PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-50gb
  namespace: my-ns
spec:
  accessModes:
  - ReadWriteMany
  resources:
     requests:
       storage: 5Gi

kubectl shows the PVC status as Pending:
$ kubectl get pvc
NAME       STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc-50gb   Pending                                                     16m

When I try to add the volume to a deployment, I get the error:
[SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected.]

How do I get the PVC to a working state?
I can't comment on your post so I'll just attempt to answer this.
I've encountered two kinds of errors when PVCs don't work on my NFS cluster. Creating the PV itself usually succeeds, so the status message alone doesn't tell you much.
First, run mount -t nfs 10.30.136.79:/data/u4 /mnt on the node that is supposed to mount the NFS share. This should succeed. If it fails, check /etc/exports on your NFS server.

Second, a non-privileged user in the pod might have trouble writing to the NFS share. The uid/gid the pod runs as must match the permissions on the NFS resource.
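To line up the uid/gid, you can set a securityContext on the pod so its processes run with IDs matching the export's ownership. A minimal sketch, assuming the NFS directory is owned by uid/gid 1000 (the pod name, image, and IDs here are illustrative; adjust them to your export):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test          # hypothetical test pod
  namespace: my-ns
spec:
  securityContext:
    runAsUser: 1000       # must match the owner uid of /data/u4 on the NFS server
    runAsGroup: 1000      # must match the owning gid
    fsGroup: 1000         # supplemental group applied to mounted volumes
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "touch /mnt/test && sleep 3600"]
    volumeMounts:
    - name: nfs-vol
      mountPath: /mnt
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: pvc-50gb
```

If the touch in the container fails with "Permission denied" while the mount itself succeeded, ownership on the export is the likely culprit.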
Good luck!
It turned out that I needed to put the IP address (and the path as well) in quotes. After fixing that, the PVC goes to Bound status and the pod mounts correctly.
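For reference, this is the PV spec with the quoting fix applied. Note also that PersistentVolumes are cluster-scoped, so the namespace field in the original manifest is ignored and can be dropped:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hq-storage-u4
spec:
  capacity:
    storage: 10Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: "/data/u4"          # quoted
    server: "10.30.136.79"    # quoted
    readOnly: false
```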