I'm trying to configure a MongoDB deployment in the Kubernetes world. My Mongo deployment file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: panel-admin-mongo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: panel-admin-mongo
  template:
    metadata:
      labels:
        component: panel-admin-mongo
    spec:
      volumes:
        - name: panel-admin-mongo-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: panel-admin-mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: panel-admin-mongo-storage
              mountPath: /data/db
Mongo service file:
apiVersion: v1
kind: Service
metadata:
  name: panel-admin-mongo-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: panel-admin-mongo
  ports:
    - port: 27017
      targetPort: 27017
And my persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
When I enter the MongoDB container using: kubectl exec -it panel-admin-mongo-deployment-6dcfc5b8c7-mk8d5 sh
and save some users' emails and passwords in a collection (e.g. users), everything works fine. But when I shut down the pod and the container inside it and boot them up again, the data is gone. Shouldn't it be independent of the pod's life cycle? And if so, what am I missing?
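Roughly, the sequence I follow looks like this (just a sketch; the new pod name is whatever kubectl get pods shows after the restart, and newer mongo images ship mongosh instead of mongo):

kubectl exec -it panel-admin-mongo-deployment-6dcfc5b8c7-mk8d5 -- sh
# inside the container: insert a test document
mongo --eval 'db.getSiblingDB("test").users.insertOne({email: "user@example.com", password: "secret"})'
# back on the host: delete the pod so the Deployment recreates it
kubectl delete pod panel-admin-mongo-deployment-6dcfc5b8c7-mk8d5
# exec into the new pod and check whether the document survived
kubectl exec -it <new-pod-name> -- mongo --eval 'db.getSiblingDB("test").users.find()'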
First, make sure that your pod is actually using the PVC:
kubectl describe po/${POD_NAME}
and check the Volumes section:
Volumes:
  prometheus-operator-db:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  prometheus-operator-db-0
    ReadOnly:   false
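You can also verify the claim and its volume directly; a quick sketch using the names from your manifests:

# the PVC should be in the Bound state and list the PV it is bound to
kubectl get pvc database-persistent-volume-claim
# inspect the backing PersistentVolume (name taken from the VOLUME column above)
kubectl get pv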
If the pod is using the PVC correctly, check the reclaim policy of your PV; the value should be persistentVolumeReclaimPolicy: Retain.
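For example (the PV name here is a placeholder; take the real one from kubectl get pv):

kubectl get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'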
I am not a k8s expert, but your problem is that you are not using Kubernetes StatefulSets; have a look here: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
AFAIK, for any persistent deployment you need to run your pods with a StatefulSet.
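A rough sketch of what that could look like for your setup (the names reuse the ones from your question, but the headless Service and the volumeClaimTemplates section are my assumptions, not your original configuration):

apiVersion: v1
kind: Service
metadata:
  name: panel-admin-mongo
spec:
  clusterIP: None                # headless Service, required by the StatefulSet
  selector:
    component: panel-admin-mongo
  ports:
    - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: panel-admin-mongo
spec:
  serviceName: panel-admin-mongo # must match the headless Service above
  replicas: 1
  selector:
    matchLabels:
      component: panel-admin-mongo
  template:
    metadata:
      labels:
        component: panel-admin-mongo
    spec:
      containers:
        - name: panel-admin-mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db
  volumeClaimTemplates:          # one PVC per replica, kept across pod restarts
    - metadata:
        name: mongo-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi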