Cannot get Pod to bind local-storage in minikube. "node(s) didn't find available persistent volumes", "waiting for first consumer to be created"

9/7/2020

I'm having some trouble configuring my Kubernetes deployment on minikube to use local-storage. I'm trying to set up a RethinkDB instance that will mount a directory from the minikube VM into the RethinkDB Pod. My setup is the following:

Storage

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rethinkdb-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rethinkdb-pv-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

So I define a StorageClass of the local-storage type, as described in the tutorials online. I then make a PersistentVolume that provides 10Gi of storage from the /mnt/data path on the underlying host. I have already created this directory on the minikube VM:

$ minikube ssh
$ ls /mnt
data  sda1

This PersistentVolume has the local-storage storage class and, through the nodeAffinity section, is restricted to nodes whose kubernetes.io/hostname label is 'minikube'.
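As a quick sanity check (not part of the original post), the label that the nodeAffinity rule matches on can be confirmed directly on the node; in a default minikube cluster the single node is named minikube and carries the matching kubernetes.io/hostname label:

$ kubectl get node minikube --show-labels
$ kubectl get nodes -L kubernetes.io/hostname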

I then make a PersistentVolumeClaim against the local-storage class that requests 5Gi.

Everything is good here, right? Here is the output of kubectl

$ kubectl get pv,pvc,storageClass
NAME                            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
persistentvolume/rethinkdb-pv   10Gi       RWO            Delete           Available           local-storage            9m33s

NAME                                       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
persistentvolumeclaim/rethinkdb-pv-claim   Pending                                      local-storage   7m51s

NAME                                             PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/local-storage        kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  9m33s
storageclass.storage.k8s.io/standard (default)   k8s.io/minikube-hostpath       Delete          Immediate              false                  24h
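Note that the PVC sitting in Pending is expected at this stage: with volumeBindingMode: WaitForFirstConsumer the claim will not bind until a Pod that uses it is scheduled. If you want to watch for the claim binding once a consumer appears, something like this works:

$ kubectl get pvc rethinkdb-pv-claim -w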

RethinkDB Deployment

I now attempt to make a Deployment with a single replica of the standard RethinkDB container.

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    name: database
  name: rethinkdb
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  selector:
    matchLabels:
      service: rethinkdb
  template:
    metadata:
      creationTimestamp: null
      labels:
        service: rethinkdb
    spec:
      containers:
      - name: rethinkdb
        image: rethinkdb:latest
        volumeMounts:
        - mountPath: /data
          name: rethinkdb-data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: rethinkdb-data
        persistentVolumeClaim:
          claimName: rethinkdb-pv-claim

This asks for a single replica of rethinkdb, references the rethinkdb-pv-claim PersistentVolumeClaim as a volume named rethinkdb-data, and mounts that volume at /data in the container.

This is what the Pod description shows, though:

Name:           rethinkdb-6dbf4ccdb-64gk5
Namespace:      development
Priority:       0
Node:           <none>
Labels:         pod-template-hash=6dbf4ccdb
                service=rethinkdb
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/rethinkdb-6dbf4ccdb
Containers:
  rethinkdb:
    Image:        rethinkdb:latest
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /data from rethinkdb-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-d5ncp (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  rethinkdb-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  rethinkdb-pv-claim
    ReadOnly:   false
  default-token-d5ncp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-d5ncp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  73s (x7 over 8m38s)  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.

"1 node(s) didn't find available persistent volumes to bind". I'm not sure how that is because the PVC is available.

$ kubectl describe pvc
Name:          rethinkdb-pv-claim
Namespace:     development
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        <none>
Annotations:
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    rethinkdb-6dbf4ccdb-64gk5
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  WaitForFirstConsumer  11s (x42 over 10m)  persistentvolume-controller  waiting for first consumer to be created before binding
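As an aside for anyone hitting the same symptom: one quick way to surface the mismatch that turns out to matter here (see the answer below) is to print the access modes of the PV and the PVC side by side, since a claim can only bind to a volume whose access modes cover what the claim requests:

$ kubectl get pv rethinkdb-pv -o jsonpath='{.spec.accessModes}'
$ kubectl get pvc rethinkdb-pv-claim -o jsonpath='{.spec.accessModes}'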

I think one hint is the Node: <none> field on the Pod - does that mean it hasn't been assigned to a node?

-- John Allard
kubernetes
minikube

1 Answer

9/7/2020

I think the issue is that the access modes didn't match: the PersistentVolume was ReadWriteOnce while the PersistentVolumeClaim asked for ReadWriteMany, so the claim could never bind. After that I had trouble getting permissions right when running minikube mount /tmp/data:/mnt/data, so I just got rid of mounting to the underlying filesystem and now it works.
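If you want to keep the local PersistentVolume rather than dropping it, a minimal sketch of the fix implied above is to make the claim request an access mode the volume actually offers, i.e. ReadWriteOnce on both sides:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rethinkdb-pv-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce   # must match a mode offered by rethinkdb-pv
  resources:
    requests:
      storage: 5Gi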

-- John Allard
Source: StackOverflow