Minikube mounted host folders are not working

8/8/2019

I am using Ubuntu 18 with minikube and VirtualBox, and I am trying to mount a host directory in order to get the input data my pod needs.

I found that minikube has issues with mounting host directories, but also that, depending on your OS and VM driver, there are certain directories that are mounted by default.

I can't find those in my pods. They are simply not there.
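
For reference, those documented defaults can be checked from inside the VM; a quick sanity check, assuming the VirtualBox-on-Linux default where the host's /home is mounted as /hosthome (per the minikube docs; worth verifying for your version):

➜  ~ minikube ssh
$ ls /hosthome      # expected to mirror the host's /home if the default mount is active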

I tried to create a persistent volume. It works, and I can see it on my dashboard, but I can't mount it into the pod. I used this definition to create the volume (shown as the JSON the API server returns):

{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "pv0003",
    "selfLink": "/api/v1/persistentvolumes/pv0001",
    "uid": "28038976-9ee4-414d-8478-b312a24a6b94",
    "resourceVersion": "2030",
    "creationTimestamp": "2019-08-08T10:48:23Z",
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolume\",\"metadata\":{\"annotations\":{},\"name\":\"pv0001\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"capacity\":{\"storage\":\"5Gi\"},\"hostPath\":{\"path\":\"/data/pv0001/\"}}}\n"
    },
    "finalizers": [
      "kubernetes.io/pv-protection"
    ]
  },
  "spec": {
    "capacity": {
      "storage": "6Gi"
    },
    "hostPath": {
      "path": "/user/data",
      "type": ""
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "persistentVolumeReclaimPolicy": "Retain",
    "volumeMode": "Filesystem"
  },
  "status": {
    "phase": "Available"
  }
}
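
For readability, the same object with the server-populated fields stripped away (selfLink, uid, resourceVersion, creationTimestamp, annotations, finalizers, status) boils down to this manifest:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 6Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  hostPath:
    path: /user/data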

And this YAML to create the job:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi31
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["sleep"]
        args: ["300"]
        volumeMounts:
        - mountPath: /data
          name: pv0003
      volumes:
        - name: pv0003
          hostPath:
            path: /user/data
      restartPolicy: Never
  backoffLimit: 1

I also tried to create the volumes according to the so-called default mount paths, but with no success.

I also tried to add a volume claim to the job creation YAML, still nothing.
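
For completeness, a minimal sketch of such a claim (the claim name is hypothetical, and storageClassName is set to the empty string so the claim binds to the pre-created PV instead of going through minikube's default dynamic provisioner):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv0003-claim          # hypothetical name, for illustration
spec:
  storageClassName: ""        # opt out of dynamic provisioning to bind the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi            # must fit within the PV's 6Gi capacity

The job's volumes section would then reference the claim instead of a hostPath:

volumes:
  - name: pv0003
    persistentVolumeClaim:
      claimName: pv0003-claim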

When I declare and mount the volumes in the job creation YAML files, the jobs can see the data that other jobs create, but that data is invisible to the host, and the host's data is invisible to the jobs.

I am running minikube as my main user, and I checked the logs in the dashboard; I am not getting any permission errors.

Is there any way to get data into this minikube cluster without setting up NFS? I am trying to use it for an MVP; the entire idea is for it to be simple...

-- thebeancounter
devops
docker
kubernetes
kubernetes-pod
ubuntu

1 Answer

8/8/2019

It's not that simple: minikube runs inside a VM created by VirtualBox, which is why a hostPath volume shows you the VM's file system instead of your PC's.
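
You can check this yourself; for example, using the path from your question, the directory that the hostPath volume resolves to lives inside the VM:

➜  ~ minikube ssh
$ ls /user/data     # resolved against the VM's file system, not your PC's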

I would really recommend using the minikube mount command - you can find its description in the minikube documentation.

From docs:

minikube mount /path/to/dir/to/mount:/vm-mount-path is the recommended way to mount directories into minikube so that they can be used in your local Kubernetes cluster.

After that, you can share your host's files inside the minikube Kubernetes cluster.

Edit:

Here is a step-by-step log of how to test it:

➜  ~ minikube start
* minikube v1.3.0 on Ubuntu 19.04
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Starting existing virtualbox VM for "minikube" ...
* Waiting for the host to be provisioned ...
* Preparing Kubernetes v1.15.2 on Docker 18.09.6 ...
* Relaunching Kubernetes using kubeadm ... 
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
➜  ~ mkdir -p /tmp/test-dir
➜  ~ echo "test-string" > /tmp/test-dir/test-file
➜  ~ minikube mount /tmp/test-dir:/test-dir
* Mounting host path /tmp/test-dir into VM as /test-dir ...
  - Mount type:   <no value>
  - User ID:      docker
  - Group ID:     docker
  - Version:      9p2000.L
  - Message Size: 262144
  - Permissions:  755 (-rwxr-xr-x)
  - Options:      map[]
* Userspace file server: ufs starting
* Successfully mounted /tmp/test-dir to /test-dir

* NOTE: This process must stay alive for the mount to be accessible ...

Now open another console:

➜  ~ minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ cat /test-dir/test-file 
test-string

Edit 2:

Example job.yml:

apiVersion: batch/v1
kind: Job
metadata:
  name: test
spec:
  template:
    spec:
      containers:
      - name: test
        image: ubuntu
        command: ["cat", "/testing/test-file"]
        volumeMounts:
        - name: test-volume
          mountPath: /testing
      volumes:
      - name: test-volume
        hostPath:
          path: /test-dir
      restartPolicy: Never
  backoffLimit: 4
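
To try it out (a quick sketch; the minikube mount process from the first edit must still be running):

➜  ~ kubectl apply -f job.yml
➜  ~ kubectl logs job/test    # should print the mounted file's contents: test-string
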
-- Jakub Bujny
Source: StackOverflow