Kubernetes Persistent Volume and hostPath

7/11/2018

I was experimenting with Kubernetes Persistent Volumes. I can't find a clear explanation in the Kubernetes documentation, and the behaviour is not what I expect, so I would like to ask here.

I configured the following Persistent Volume and Persistent Volume Claim.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: store-persistent-volume
  namespace: test
spec:
  storageClassName: hostpath
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data/data"

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: store-persistent-volume-claim
  namespace: test
spec:
  storageClassName: hostpath
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

I also configured the following Deployment and Service.

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: store-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: store
  template:
    metadata:
      labels:
        k8s-app: store
    spec:
      volumes:
      - name: store-volume
        persistentVolumeClaim:
          claimName: store-persistent-volume-claim
      containers:
      - name: store
        image: localhost:5000/store
        ports:
        - containerPort: 8383
          protocol: TCP
        volumeMounts:
        - name: store-volume
          mountPath: /data

---
#------------ Service ----------------#

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: store
  name: store
  namespace: test
spec:
  type: LoadBalancer
  ports:
  - port: 8383
    targetPort: 8383
  selector:
    k8s-app: store

As you can see, I defined '/Volumes/Data/data' as the Persistent Volume's host path and I expect it to be mounted at '/data' in the container.

So I am assuming that whatever is in '/Volumes/Data/data' on the host should be visible in the '/data' directory in the container. Is this assumption correct? Because this is definitely not happening at the moment.

My second assumption is that whatever I save under '/data' should be visible on the host, which is also not happening.
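For example, I would expect a check like the following to succeed (the pod name is a placeholder; the real one comes from kubectl get pods -n test):

# host -> container: create a file on the Mac and look for it inside the container
touch /Volumes/Data/data/from-host.txt
kubectl exec -n test <store-pod-name> -- ls /data

# container -> host: create a file inside the container and look for it on the Mac
kubectl exec -n test <store-pod-name> -- touch /data/from-container.txt
ls /Volumes/Data/data

Neither file shows up on the other side.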

I can see from the Kubernetes console that everything started correctly (Persistent Volume, Claim, Deployment, Pod, Service...).

Am I understanding the persistent volume concept correctly at all?

PS. I am trying this on a Mac with Docker (18.05.0-ce-mac67 (25042), edge channel); maybe it is not supposed to work on a Mac?

Thx for answers

-- posthumecaver
docker
kubernetes

3 Answers

5/20/2019

Assuming you are using a multi-node Kubernetes cluster, you should be able to see the data mounted locally at /Volumes/Data/data on the specific worker node where the pod is running.

You can check which worker node your pod is scheduled on with the command kubectl get pods -o wide -n test.
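For example (the pod and node names you will see are your own, nothing below is real output):

kubectl get pods -o wide -n test
# the NODE column shows the worker the pod is scheduled on; that node's
# /Volumes/Data/data directory is the one the hostPath volume is backed by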

Please note that, as per the Kubernetes docs, a HostPath PersistentVolume is for single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster.

It does work in my case.

-- Learner
Source: StackOverflow

3/29/2020

As you are using the host path, you should check this '/data' on the worker node on which the pod is running.

-- AATHITH RAJENDRAN
Source: StackOverflow

11/14/2019

Like the answer above says, you need to run 'kubectl get po -n test -o wide' and you will see the node the pod is hosted on. Then, if you SSH into that worker, you can see the volume.
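Something along these lines (the node name is a placeholder, taken from the NODE column of that command):

ssh <worker-node-from-NODE-column>
ls /Volumes/Data/data    # the hostPath directory defined in the PersistentVolume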

-- antandrades
Source: StackOverflow