Create Kubernetes Persistent Volume with mounted directory

11/18/2020

In my /mnt/ I have a number of hard drives mounted (e.g. at /mnt/hdd1/, /mnt/hdd2/). Is there any way to make a K8s Persistent Volume on /mnt that can see the content of the hard drives mounted under /mnt? When I make a Local Persistent Volume on /mnt, the K8s pods see the directories hdd1 and hdd2, but they appear as empty.

The following is what I have tested:

Undesired solution 1:

I can make a Local Persistent Volume on /mnt/hdd1, and then my K8s pod will be able to see the contents of the hdd1 drive. But as I mentioned, I want my pod to see all the hard drives, and I don't want to create a separate persistent volume for each one, especially since new hard drives may be mounted under /mnt later.

Undesired solution 2:

I can mount a Local Persistent Volume on /mnt/ with the K8s option mountPropagation: HostToContainer set in the YAML file for my deployment. In that case, my pod will see a hard drive's contents if I remount the drive. But this is not desired, because if the pod restarts, I need to remount the hard drive again for the pod to see its contents! (It only works when the hard drive is remounted while the pod is alive.)
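For reference, a minimal sketch of the mountPropagation setting described above (the container, image, and volume names are placeholders, not from the question):

```yaml
# Sketch of "undesired solution 2": a hostPath volume mounted with
# mountPropagation so that mounts made on the host AFTER the pod starts
# propagate into the container. Names are illustrative only.
containers:
  - name: mycontainer
    image: myimage
    volumeMounts:
      - mountPath: /mnt
        name: mnt-volume
        mountPropagation: HostToContainer  # host-side (re)mounts under /mnt become visible in the pod
volumes:
  - name: mnt-volume
    hostPath:
      path: /mnt
```

As the question notes, this only helps for drives (re)mounted while the pod is running; mounts that existed before the pod started still appear empty.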

-- Ali_MM
kubernetes
mounted-volumes
persistent-volume-claims
persistent-volumes

2 Answers

1/23/2021

I was able to let the pod see all hard drives mounted in a directory by using hostPath: a PersistentVolume can be defined with hostPath as its source. My final solution was:

  1. The most important part of the solution: define a PersistentVolume with hostPath as its source, with a nodeAffinity to ensure it will only be mounted on the node with the hard drives:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: all-harddrives-pv
spec:
  capacity:
    storage: 1Gi    # Required field; hostPath does not enforce an actual size
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: all-harddrives-storage
  hostPath:
    path: /mnt      # Where all the hard drives are mounted
    type: Directory
  nodeAffinity:     # Use nodeAffinity to ensure it will only be mounted on the node with harddrives.
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - MyNodeName
  2. Define a PersistentVolumeClaim that is bound to the above PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: all-harddrives-pvc
spec:
  storageClassName: all-harddrives-storage
  volumeName: all-harddrives-pv   # Bind explicitly to the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  3. Mount it on the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deployment
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
        - name: mycontainername
          image: myimage
          volumeMounts:
            - mountPath: /mnt
              name: all-harddrives
      volumes:
        - name: all-harddrives
          persistentVolumeClaim:
            claimName: all-harddrives-pvc   # The PVC defined above
      nodeSelector:
        kubernetes.io/hostname: MyNodeName
-- Ali_MM
Source: StackOverflow

1/23/2021

An alternative approach, the Local Persistent Volume Static Provisioner, fits better with the Kubernetes way of working.

It supports metrics, storage lifecycle management (e.g. cleanup), and node/PV affinity, and it is extensible (e.g. dynamic ephemeral storage). For example, with eks-nvme-ssd-provisioner, a DaemonSet can run to provision fast storage as local volumes. This is ideal for workloads that require ephemeral local storage for data caching or fast computation, with no need to manually perform mounts on the EC2 node before pods start.
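As a sketch of how the static provisioner is typically consumed (the StorageClass name here is illustrative; see the linked examples for full manifests), local PVs are exposed through a StorageClass that delays binding until a pod is scheduled:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage                        # illustrative name
provisioner: kubernetes.io/no-provisioner    # PVs are created by the provisioner DaemonSet, not dynamically
volumeBindingMode: WaitForFirstConsumer      # bind only once a consuming pod is scheduled to a node
```

WaitForFirstConsumer matters for local storage: it ensures the scheduler picks the node first, so the claim binds to a PV that actually exists on that node.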

Usage YAML examples can be found at sig-storage-local-static-provisioner/examples.

-- gohm'c
Source: StackOverflow