I am deploying a stateful application in Kubernetes (K8s). Before that, I'm trying to implement an example.
Before deploying MySQL in my cluster, I created a PV and a PVC:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
At this point in time, I have not edited or created any new StorageClass.
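For reference, the two objects can be applied and the binding checked with something like this (the file name is just an example):

kubectl apply -f mysql-pv.yaml
kubectl get pv mysql-pv-volume
kubectl get pvc mysql-pv-claim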
I then go on to deploy the application using a volumeMount in my deployment:

...
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
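For completeness, the surrounding Deployment looks roughly like this; the image, labels, and password below are placeholders rather than my exact manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate          # avoid two MySQL pods writing to the same volume
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: changeme   # example only; use a Secret in practice
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim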
I bring up the application successfully, take it down (first the pod and then the deployment), bring the application back, and notice that my application data persists under /var/lib/mysql.
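Roughly, those steps look like this (resource and file names are examples, not necessarily my exact ones):

kubectl apply -f mysql-deployment.yaml            # bring the application up
kubectl delete pod -l app=mysql                   # take down the pod first
kubectl delete deployment mysql                   # then the deployment
kubectl apply -f mysql-deployment.yaml            # bring the application back
kubectl exec deploy/mysql -- ls /var/lib/mysql    # the data is still there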
I later noticed that /mnt/data does NOT exist on my host machine. I am working in minikube.
I looked into the storage class and it seems to be using:
StorageClass: manual
But if I check all my storage classes, I see only this one :
# kubectl describe storageclass
Name: standard
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile"},"name":"standard","namespace":""},"provisioner":"k8s.io/minikube-hostpath"}
,storageclass.beta.kubernetes.io/is-default-class=true
Provisioner: k8s.io/minikube-hostpath
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
But the one that MySQL is using is not there.
I need help understanding this, please. Where is the PV path /mnt/data?
On my host machine, the result of looking for /mnt/data is:
# cd /mnt/data
cd: no such file or directory: /mnt/data
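For reference, the PV and PVC themselves can also be inspected directly to see which class name they carry and where the hostPath points:

kubectl describe pv mysql-pv-volume
kubectl describe pvc mysql-pv-claim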
Minikube is a Virtual Machine (VM) based all-in-one solution, so you have a single node where the entire control plane lives, and that same node is also your only worker node:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
minikube Ready master 1d v1.10.0
Now, it is that one node, a VM, that hosts your Kubernetes cluster, so all host-related actions have to be done on that VM:
$ minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ ls -al /mnt
total 4
drwxr-xr-x 3 root root 60 Sep 8 12:38 .
drwxr-xr-x 17 root root 460 Sep 8 12:38 ..
drwxr-xr-x 7 root root 4096 Sep 8 12:38 vda1
And here you have your /mnt directory: the hostPath /mnt/data lives inside the minikube VM, not on your host machine.
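If you want to confirm that the MySQL data actually landed there, you can also run a command in the VM non-interactively; assuming the pod has already written to the volume, something like this should list the database files:

$ minikube ssh -- ls -la /mnt/data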