I've been fighting for hours to set up my k8s pods on my single-node minikube, and I'm stuck at the persistent-volume creation stage.
This command always fails with the error below, even when I copy/paste the example spec straight from the Kubernetes documentation:
$kubectl apply -f pv-volume.yml
error: SchemaError(io.k8s.api.core.v1.ScaleIOVolumeSource): invalid object doesn't have additional properties
$cat pv-volume.yml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
I can't figure out why kubectl insists on ScaleIO in my spec when I'm using a local volume.
I get the same error when I set storageClassName to standard.
Any idea what the problem could be?
My versions:
$minikube version
minikube version: v1.0.0
$kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
I was going from error to error; I tried creating other objects such as Secrets and hit the same issue.
It turned out that upgrading kubectl was the key to the solution. My 1.10 client was talking to a 1.14 server, which is outside the supported skew window (kubectl is only supported within one minor version of the API server), and that mismatch explains the weird error messages. It wasn't really minikube related.
It's now working, and I can run my kubectl commands without errors.
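For reference, here is one way to check and fix the client skew on macOS. This is a sketch: it assumes Homebrew manages your kubectl, and the direct-download URL follows the pattern from the official install docs of that era.

$ kubectl version --short      # compare client and server minor versions
$ brew upgrade kubernetes-cli  # if kubectl was installed via Homebrew

# Or download a client binary matching the server version directly:
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/darwin/amd64/kubectl
$ chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl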
In minikube, the dynamic provisioner is already there by default; you just need to create PersistentVolumeClaims using that StorageClass.
C02W84XMHTD5:Downloads iahmad$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
C02W84XMHTD5:Downloads iahmad$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    4d        v1.10.0
C02W84XMHTD5:Downloads iahmad$
C02W84XMHTD5:Downloads iahmad$ kubectl get storageclasses.storage.k8s.io
NAME                 PROVISIONER                AGE
standard (default)   k8s.io/minikube-hostpath   4d
C02W84XMHTD5:Downloads iahmad$
So for data persistence to the host, you just need a volume claim and to reference it in your Kubernetes Deployment.
Here is an example MySQL volume claim using the built-in minikube storage class:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-volumeclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: standard
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
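To use it, apply the claim and confirm it gets bound by the default provisioner (mysql-volumeclaim.yml is just an illustrative file name):

$ kubectl apply -f mysql-volumeclaim.yml
$ kubectl get pvc mysql-volumeclaim   # STATUS should become Bound once provisioned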
Usage inside the MySQL Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-volumeclaim
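Note that this Deployment reads MYSQL_ROOT_PASSWORD from a Secret named mysql with a password key, so that Secret has to exist first. For example (the literal value is a placeholder):

$ kubectl create secret generic mysql --from-literal=password=YOUR_PASSWORD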