When pods inside Minikube (k8s) claim Volumes, how can we enforce that all of them actually lie on a certain disk of the bare metal *host machine*?

4/30/2020

We have one bare metal machine with one SSD and one HDD. The pods in Minikube (k8s) will claim some PVCs and get volumes. We want to enforce that these volumes actually live on our SSD, not the HDD. How can we do that? Thanks very much!

p.s. What I have tried: when a PVC is requested, Minikube assigns a volume under /tmp/hostpath-provisioner/.... As far as I can tell, this is a path inside the Docker container that Minikube itself runs in, not a path on the host machine. Thus, I tried minikube mount /data/minikube-my-tmp-hostpath-provisioner:/tmp/hostpath-provisioner, where /data on the bare metal host is on the SSD (not the HDD). However, this makes the pods unhappy, and after a restart they all fail... In addition, I find that only new files are written to the newly mounted path, while the existing files remain inside the container...
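
For reference, what I ran was roughly the following (plus an equivalent form that declares the mount when starting the cluster); /data/minikube-my-tmp-hostpath-provisioner is simply the SSD-backed directory on our host:

minikube mount /data/minikube-my-tmp-hostpath-provisioner:/tmp/hostpath-provisioner
# or, declaring the mount when (re)creating the cluster:
minikube start --mount --mount-string="/data/minikube-my-tmp-hostpath-provisioner:/tmp/hostpath-provisioner"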

-- ch271828n
docker
kubernetes
minikube

1 Answer

4/30/2020

This sounds like exactly the kind of thing storage classes exist for:

A StorageClass provides a way for administrators to describe the “classes” of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called “profiles” in other storage systems.

So, in other words, you can create multiple storage classes with different performance or other characteristics, and then decide which one is most appropriate for each claim you create.

For example, this is a storage class you can use on minikube:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath   # minikube's built-in hostPath provisioner
parameters:
  type: pd-ssd                          # provisioner-specific parameter (e.g. GCE PD uses it to pick SSD-backed disks)
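
A quick way to try it (the file name here is just an example) is to apply the class and confirm it is registered:

kubectl apply -f fast-storageclass.yaml
kubectl get storageclass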

And you'll probably also need to create a PV, which you can do using:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: some-name-pv
spec:
  storageClassName: fast          # must match the class requested by the claim, so the claim can bind to this PV
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /tmp/path               # path inside the minikube node; point it at a directory backed by the SSD
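
Note that for the data to actually land on the host's SSD, the hostPath directory above has to be backed by a directory on that disk. One way to do that (a sketch, assuming /data/pv is an SSD-backed directory on the bare metal host) is to mount it into minikube before creating the PV:

minikube mount /data/pv:/tmp/path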

Then, finally, the PVC would look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: some-pvc 
spec:
  storageClassName: fast
  resources:
    requests:
      storage: 100Mi
  accessModes:
    - ReadWriteOnce
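
To complete the picture, a pod would consume that claim along these lines (the pod name, image, and mount path are just placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: some-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data    # where the volume appears inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: some-pvc       # the claim defined above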
-- omricoco
Source: StackOverflow