JupyterHub on Kubernetes: automated PVCs are not creating new local persistent volumes

6/21/2021

I am trying to deploy JupyterHub (following the Zero to JupyterHub guide) on a local Kubernetes cluster on a RHEL 8 machine. After hours of trying, the basic service is now running. I created a PV for the main service, which works fine.

Name:              hub-db-dir
Labels:            <none>
Annotations:       pv.kubernetes.io/bound-by-controller: yes
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Bound
Claim:             jupyter/hub-db-dir
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          5Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [host]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /temp
Events:    <none>

But as soon as I log in, I get the following message: Screenshot

I figured out that Kubernetes doesn't create a new PV on its own. Even when I create one (with the appropriate name), it fails.

Does anyone have a solution for this?

My StorageClass:

Name:            local-storage
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/no-provisioner
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>
-- Bltzz
jupyterhub
kubernetes
persistent-volume-claims
persistent-volumes

2 Answers

6/21/2021

From the info provided, you have:

provisioner: kubernetes.io/no-provisioner

According to https://kubernetes.io/docs/concepts/storage/storage-classes/#local:

Local volumes do not currently support dynamic provisioning, however a StorageClass should still be created to delay volume binding until Pod scheduling. This is specified by the WaitForFirstConsumer volume binding mode.
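
For reference, a minimal manifest for such a StorageClass (matching the one shown in your question) looks like this; with kubernetes.io/no-provisioner you still have to create every PersistentVolume yourself:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
# No dynamic provisioning: PVs must be created manually
provisioner: kubernetes.io/no-provisioner
# Delay binding until a Pod using the claim is scheduled
volumeBindingMode: WaitForFirstConsumer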

I've had similar issues on cloud providers where volumes don't support some (standard) part of the config and so do not provision as expected. Alternatives are to use a different storage method (cloud object storage such as S3, or a database).

Also see:

https://kubernetes.io/docs/concepts/storage/volumes/#local

You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.

-- Rob Evans
Source: StackOverflow

6/22/2021

In general, you can start from the Kubernetes documentation, where the storage classes concept is explained, along with which provisioners are supported. The provisioner field must be specified. Since you are running a local Kubernetes cluster on a RHEL 8 machine, local volumes could help you.

Look at the example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node

This example shows a PersistentVolume using a local volume and nodeAffinity. You need to set a PersistentVolume nodeAffinity when using local volumes. It is also recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer.

Local volumes do not currently support dynamic provisioning, however a StorageClass should still be created to delay volume binding until Pod scheduling. This is specified by the WaitForFirstConsumer volume binding mode.

Delaying volume binding allows the scheduler to consider all of a Pod's scheduling constraints when choosing an appropriate PersistentVolume for a PersistentVolumeClaim.
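
As an illustration (the claim name and size below are made up), a claim against such a class stays Pending, typically with an event like "waiting for first consumer to be created before binding", until a Pod that uses it is scheduled:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim            # hypothetical name
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # hypothetical size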

If you are looking for a complete guide to configuring storage for a bare-metal cluster, you can find it here. As I mentioned before, local volumes do not currently support dynamic provisioning; however, you can get around this by using an NFS server.

An nfs volume allows an existing NFS (Network File System) share to be mounted into a Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of an nfs volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be shared between pods. NFS can be mounted by multiple writers simultaneously. Note: You must have your own NFS server running with the share exported before you can use it.
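
A minimal sketch of an NFS-backed PersistentVolume (the name, server address, and export path are placeholders for your own NFS share):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jupyterhub-nfs-pv         # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany                 # NFS can be mounted by multiple writers
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com       # placeholder: your NFS server
    path: /exports/jupyterhub     # placeholder: your exported share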

Here you can find an NFS example based on the official documentation. Also follow this guide for more information on how to set up Kubernetes Bare-Metal Dynamic Storage Allocation; a sketch of that approach follows.
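
For example, one common way to get dynamic provisioning on bare metal is the nfs-subdir-external-provisioner (an assumption on my part; the guide may use a different provisioner). Per its README, it can be installed with Helm roughly like this, where nfs.server and nfs.path are placeholders for your own share:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=nfs.example.com \
  --set nfs.path=/exports/jupyterhub

This creates a StorageClass whose claims are provisioned automatically as subdirectories of the exported share, which is what the automatically created JupyterHub user PVCs need.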

-- Mikołaj Głodziak
Source: StackOverflow