I'm trying to run Postgres using KubeDB on minikube, mounting my data from a local directory (located on my Mac). When the pod runs I don't get the expected behaviour; two things happen: one, the mount isn't there, and two, I see the error pod has unbound immediate PersistentVolumeClaims.
First, here are my yaml files:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: adminvol
  namespace: demo
  labels:
    release: development
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /Users/myusername/local_docker_poc/admin/lib/postgresql/data
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: demo
  name: adminpvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      release: development
apiVersion: kubedb.com/v1alpha1
kind: Postgres
metadata:
  name: quick-postgres
  namespace: demo
spec:
  version: "10.2-v2"
  storageType: Durable
  storage:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 1Gi
  volumeMounts:
    - mountPath: /busy
      name: naim
      persistentVolumeClaim:
        claimName: adminpvc
  terminationPolicy: WipeOut
According to this, which is reflected in the answer below, I've removed the storageClass from all my yaml files.
The kubectl describe pod output looks like this:
Name:               quick-postgres-0
Namespace:          demo
Priority:           0
PriorityClassName:  <none>
Node:               minikube/10.0.2.15
Start Time:         Wed, 25 Sep 2019 22:18:44 +0300
Labels:             controller-revision-hash=quick-postgres-5d5bcc4698
                    kubedb.com/kind=Postgres
                    kubedb.com/name=quick-postgres
                    kubedb.com/role=primary
                    statefulset.kubernetes.io/pod-name=quick-postgres-0
Annotations:        <none>
Status:             Running
IP:                 172.17.0.7
Controlled By:      StatefulSet/quick-postgres
Containers:
  postgres:
    Container ID:  docker://6bd0946f8197ddf1faf7b52ad0da36810cceff4abb53447679649f1d0dba3c5c
    Image:         kubedb/postgres:10.2-v3
    Image ID:      docker-pullable://kubedb/postgres@sha256:9656942b2322a88d4117f5bfda26ee34d795cd631285d307b55f101c2f2cb8c8
    Port:          5432/TCP
    Host Port:     0/TCP
    Args:
      leader_election
      --enable-analytics=true
      --logtostderr=true
      --alsologtostderr=false
      --v=3
      --stderrthreshold=0
    State:          Running
      Started:      Wed, 25 Sep 2019 22:18:45 +0300
    Ready:          True
    Restart Count:  0
    Environment:
      APPSCODE_ANALYTICS_CLIENT_ID:  90b12fedfef2068a5f608219d5e7904a
      NAMESPACE:                     demo (v1:metadata.namespace)
      PRIMARY_HOST:                  quick-postgres
      POSTGRES_USER:                 <set to the key 'POSTGRES_USER' in secret 'quick-postgres-auth'>      Optional: false
      POSTGRES_PASSWORD:             <set to the key 'POSTGRES_PASSWORD' in secret 'quick-postgres-auth'>  Optional: false
      STANDBY:                       warm
      STREAMING:                     asynchronous
      LEASE_DURATION:                15
      RENEW_DEADLINE:                10
      RETRY_PERIOD:                  2
    Mounts:
      /dev/shm from shared-memory (rw)
      /var/pv from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from quick-postgres-token-48rkd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-quick-postgres-0
    ReadOnly:   false
  shared-memory:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  quick-postgres-token-48rkd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  quick-postgres-token-48rkd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age  From               Message
  ----     ------            ---- ----               -------
  Warning  FailedScheduling  39s  default-scheduler  pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         39s  default-scheduler  Successfully assigned demo/quick-postgres-0 to minikube
  Normal   Pulled            38s  kubelet, minikube  Container image "kubedb/postgres:10.2-v3" already present on machine
  Normal   Created           38s  kubelet, minikube  Created container
  Normal   Started           38s  kubelet, minikube  Started container
I followed the official manual on how to mount a PVC here. For debugging, I used the same PV and PVC to mount a simple busybox container, and it worked fine, i.e. I can see the mount with the data in it:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: demo
spec:
  containers:
    - name: busybox
      image: busybox
      command:
        - sleep
        - "3600"
      volumeMounts:
        - mountPath: /busy
          name: adminpvc
  volumes:
    - name: adminpvc
      persistentVolumeClaim:
        claimName: adminpvc
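For reference, one way to confirm that the hostPath data actually shows up inside this container (pod name and namespace taken from the manifest above; the files listed will be whatever is in the local directory):
$ kubectl exec -n demo busybox -- ls -la /busy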
The only difference between my own pod and the one KubeDB creates (which, to my understanding, is backed by a StatefulSet) is that I kept the storageClass in the PV and PVC! If I remove the storage class, I do see the mount point inside the container, but it's empty and has no data.
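(For completeness, the claim the KubeDB pod actually mounts, data-quick-postgres-0 per the describe output above, can be compared with adminpvc by listing and describing the PVCs in the namespace; the events on the generated claim usually say why it stays unbound:)
$ kubectl get pvc -n demo
$ kubectl describe pvc data-quick-postgres-0 -n demo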
Remove the storageClass line from the PersistentVolume.
In minikube, try something like this; here is an example for Elasticsearch:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch
spec:
  capacity:
    storage: 400Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/elasticsearch/"
For more details you can also check this out: pod has unbound PersistentVolumeClaims
EDIT:
Check the available StorageClasses:
kubectl get storageclass
PV file:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/postgres-pv
PVC file:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pvc
  labels:
    type: local
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  volumeName: postgres-pv
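A quick way to sanity-check the pair above after applying it (the file names here are just placeholders) is to confirm that both objects report a Bound status:
$ kubectl apply -f postgres-pv.yaml
$ kubectl apply -f postgres-pvc.yaml
$ kubectl get pv postgres-pv
$ kubectl get pvc postgres-pvc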
You are using the custom Postgres resource of kubedb.com/v1alpha1. They define a custom way to handle storage. It seems like you must set the spec.storage.storageClassName key, since a "PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on."
Which StorageClass to choose? Since you're using Minikube, I recommend sticking with Minikube's minikube-hostpath provisioner. You can check if it's available:
$ kubectl get storageclass
NAME                 PROVISIONER                AGE
standard (default)   k8s.io/minikube-hostpath   2m36s
It supports dynamic provisioning and is set as the default StorageClass. Try setting spec.storage.storageClassName to standard (the class backed by the k8s.io/minikube-hostpath provisioner shown above) and update your volumes accordingly.
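As a rough sketch of what that could look like on the Postgres object from the question (the class name standard is the one shown by kubectl get storageclass above; adjust it if yours differs):
apiVersion: kubedb.com/v1alpha1
kind: Postgres
metadata:
  name: quick-postgres
  namespace: demo
spec:
  version: "10.2-v2"
  storageType: Durable
  storage:
    storageClassName: standard
    accessModes:
      - ReadWriteOnce   # ReadWriteOnce assumed here; the question originally asked for ReadWriteMany
    resources:
      requests:
        storage: 1Gi
  terminationPolicy: WipeOut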