Kubernetes : node(s) didn't find available persistent volumes to bind

11/2/2019

I am trying to set up local storage as outlined here (https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/). I'm getting the following error: the scheduler is unable to schedule the pods. The local storage is mapped to one of the worker nodes. I tried setting up the local storage on the master node instead and got the same error. Where am I going wrong?

Warning FailedScheduling 24s (x2 over 24s) default-scheduler 0/3 nodes are available: 1 node(s) didn't match node selector, 2 node(s) didn't find available persistent volumes to bind.

-------------------------------------------------------------------

kubectl get nodes -o wide
NAME                  STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
rpi-k8-workernode-2   Ready    <none>   92d   v1.15.0   192.168.100.50   <none>        Raspbian GNU/Linux 9 (stretch)   4.19.42-v7+         docker://18.9.0
rpi-mon-k8-worker     Ready    <none>   91d   v1.15.0   192.168.100.22   <none>        Raspbian GNU/Linux 9 (stretch)   4.19.42-v7+         docker://18.9.0
udubuntu              Ready    master   92d   v1.15.1   192.168.100.24   <none>        Ubuntu 18.04.3 LTS               4.15.0-55-generic   docker://19.3.4

-------------------------------------------------------------------
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
------------------------------------------------------------------------

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ghost
  namespace: ghost
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/mydrive/ghost-data/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - rpi-mon-k8-worker
------------------------------------------------------------------------
apiVersion: v1

kind: PersistentVolumeClaim

metadata:
  name: pvc-ghost
  namespace: ghost
  labels:
    pv: pv-ghost

spec:
  accessModes:
    - ReadWriteOnce

  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage
  selector:
    matchLabels:
      name: pv-ghost
------------------------------------------------------------------------

apiVersion: apps/v1

kind: Deployment
metadata:
  name: deployment-ghost
  namespace: ghost
  labels:
    env: prod
    app: ghost-app

spec:
  template:
    metadata:
      name: ghost-app-pod
      labels:
        app:  ghost-app
        env:  production
    spec:
      containers:
        - name: ghost
          image: arm32v7/ghost
          imagePullPolicy: IfNotPresent
          volumeMounts:
           - mountPath: /var/lib/ghost/content
             name: ghost-blog-data
          securityContext:
            privileged: true
      volumes:
      - name: ghost-blog-data
        persistentVolumeClaim:
          claimName: pvc-ghost
      nodeSelector:
        beta.kubernetes.io/arch: arm

  replicas: 2
  selector:
    matchLabels:
      app: ghost-app


kubectl get nodes --show-labels
NAME                  STATUS   ROLES    AGE   VERSION   LABELS
rpi-k8-workernode-2   Ready    <none>   93d   v1.15.0   beta.kubernetes.io/arch=arm,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm,kubernetes.io/hostname=rpi-k8-workernode-2,kubernetes.io/os=linux
rpi-mon-k8-worker     Ready    <none>   93d   v1.15.0   beta.kubernetes.io/arch=arm,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm,kubernetes.io/hostname=rpi-mon-k8-worker,kubernetes.io/os=linux
udubuntu              Ready    master   93d   v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=udubuntu,kubernetes.io/os=linux,node-role.kubernetes.io/master=

-----------------------------------------------------------
ud@udubuntu:~/kube-files$ kubectl describe pvc pvc-ghost -n ghost
Name:          pvc-ghost
Namespace:     ghost
StorageClass:  manual
Status:        Pending
Volume:
Labels:        pv=pv-ghost
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"pv":"pv-ghost"},"name":"pvc-ghost","namespace":"...
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type    Reason                Age               From                         Message
  ----    ------                ----              ----                         -------
  Normal  WaitForFirstConsumer  6s (x2 over 21s)  persistentvolume-controller  waiting for first consumer to be created before binding
-- IT_novice
kubernetes

1 Answer

11/2/2019

As you can see from the warning 1 node(s) didn't match node selector, 2 node(s) didn't find available persistent volumes to bind., you set a nodeSelector in deployment-ghost, so one node (the master) didn't match that selector. If you delete the nodeSelector field from that .yaml file, the pod will be deployed to the node where the PV is created. AFAIK, it isn't possible to deploy a pod to one worker node while the PV it claims lives on another worker node. And finally, no PVs were created on the other nodes. You can check the created PVs and PVCs by:

kubectl get pv
kubectl get pvc -n <namespace>

and check their details by:

kubectl describe pv <pv_name> 
kubectl describe pvc <pvc_name> -n <namespace>
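For example, the pod template from deployment-ghost would look like this with the nodeSelector removed (a sketch; everything else stays as posted):

```yaml
    spec:
      containers:
      - name: ghost
        image: arm32v7/ghost
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /var/lib/ghost/content
          name: ghost-blog-data
      volumes:
      - name: ghost-blog-data
        persistentVolumeClaim:
          claimName: pvc-ghost
      # nodeSelector removed: with volumeBindingMode: WaitForFirstConsumer,
      # the scheduler places the pod on the node named in the PV's nodeAffinity
```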

Your issue is explained in the official documentation here, which says:

Claims can specify a label selector to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields:

1- matchLabels - the volume must have a label with this value

2- matchExpressions - a list of requirements made by specifying key, list of values, and operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist
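As an illustration, the claim could also select the PV via matchExpressions instead of matchLabels (a hypothetical variant of the pvc-ghost spec, assuming the PV carries a name: pv-ghost label as shown below):

```yaml
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage
  selector:
    matchExpressions:
    - key: name       # label key set in the PV's metadata.labels
      operator: In    # other valid operators: NotIn, Exists, DoesNotExist
      values:
      - pv-ghost
```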

So, edit the PersistentVolume file and add the labels field so it looks like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ghost
  labels:
    name: pv-ghost
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/mydrive/ghost-data/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - rpi-mon-k8-worker

It isn't necessary to add the namespace field to kind: PersistentVolume, because PersistentVolumes are cluster-scoped resources (only PersistentVolumeClaims are namespaced objects), and PVC-to-PV binding is an exclusive, one-to-one mapping.

I tested it and it works for me.

-- Majid Rajabi
Source: StackOverflow