Recently, the managed pod in my mongo deployment on GKE was automatically deleted and a new one was created in its place. As a result, all my database data was lost.
I specified a PV for the deployment, the PVC was bound, and I used the standard storage class (Google Persistent Disk). The PersistentVolumeClaim had not been deleted either.
Here's the result from kubectl get pv:
[image: output of kubectl get pv showing the pvc bound to its persistent volume]
My mongo deployment, along with the persistent volume claim and the service, were all created using Kubernetes' kompose tool from a docker-compose.yml for a Prisma 1 + MongoDB deployment.
Here are my yamls:
mongo-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose -f docker-compose.yml convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: mongo
  name: mongo
  namespace: dbmode
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose -f docker-compose.yml convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: mongo
    spec:
      containers:
      - env:
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: prisma
        - name: MONGO_INITDB_ROOT_USERNAME
          value: prisma
        image: mongo:3.6
        imagePullPolicy: ""
        name: mongo
        ports:
        - containerPort: 27017
        resources: {}
        volumeMounts:
        - mountPath: /var/lib/mongo
          name: mongo
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - name: mongo
        persistentVolumeClaim:
          claimName: mongo
status: {}
mongo-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: mongo
  name: mongo
  namespace: dbmode
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
mongo-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose -f docker-compose.yml convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: mongo
  name: mongo
  namespace: dbmode
spec:
  ports:
  - name: "27017"
    port: 27017
    targetPort: 27017
  selector:
    io.kompose.service: mongo
status:
  loadBalancer: {}
I've tried checking the contents mounted in /var/lib/mongo, and all I got was an empty lost+found/ folder. I've also tried searching the Google Persistent Disk itself, but there was nothing in the root directory and I didn't know where else to look.
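For reference, I checked the mount roughly like this (with my actual pod name in place of the placeholder):

# list what actually ended up inside the mounted volume
kubectl exec -ti -n dbmode <mongo-pod-name> -- ls -la /var/lib/mongo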
I guess that for some reason the mongo deployment is not reading the old data from the persistent volume when it starts a new pod, which is extremely perplexing.
I also have another Kubernetes project where the same thing happened, except that the old pod still showed up but had an Evicted status.
"I've tried checking the contents mounted in /var/lib/mongo, and all I got was an empty lost+found/ folder."

OK, but have you checked whether it was actually saving any data there before the Pod restart and data loss? My guess is that it was never saving any data in that directory.
I checked the image you used by running a simple Pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-pod
    image: mongo:3.6
When you connect to it by running:
kubectl exec -ti my-pod -- /bin/bash
and check the default mongo configuration file:
root@my-pod:/var/lib# cat /etc/mongod.conf.orig
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb # 👈
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:
you can see, among other things, that dbPath is actually set to /var/lib/mongodb and NOT to /var/lib/mongo.
So chances are that your mongo wasn't actually saving any data to your PV, i.e. to the /var/lib/mongo directory where it was mounted, but to /var/lib/mongodb, as stated in its configuration file.
You should be able to check this easily by running kubectl exec into your running mongo pod:
kubectl exec -ti <mongo-pod-name> -- /bin/bash
and verify where the data is saved.
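For example, once inside the pod you could quickly compare the mount point with the directory from the config file, along these lines:

# inside the mongo pod
ls -la /var/lib/mongo     # the PV mount point (here it only contained lost+found)
ls -la /var/lib/mongodb   # dbPath from the config file template, if it exists at all
df -h                     # shows which paths are actually backed by the mounted PV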
If you didn't override the original config file in any way (e.g. by providing a ConfigMap), mongo would save its data to /var/lib/mongodb, and this directory, not being a mount point for your volume, is part of the Pod's filesystem and is therefore ephemeral.
However, the above-mentioned /etc/mongod.conf.orig is only a template, so it doesn't reflect the configuration that has actually been applied.
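One way to double-check this is to look at how the main mongod process was actually started; the snippet below is only a sketch (<mongo-pod-name> is a placeholder and the exact output depends on the image):

# print the command line of PID 1 (mongod) inside the container
kubectl exec <mongo-pod-name> -- sh -c 'tr "\0" " " < /proc/1/cmdline; echo'
# if it prints just "mongod" with no --config or --dbpath flags,
# the built-in default data directory is in use, not the template's value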
If you run:
kubectl logs your-mongo-pod
it will show where the data directory is located:
$ kubectl logs my-pod
2020-12-16T22:20:47.472+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=my-pod
2020-12-16T22:20:47.473+0000 I CONTROL [initandlisten] db version v3.6.21
...
As we can see, the data is saved in /data/db:
dbpath=/data/db
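If you want the data to actually end up on your PV, one minimal option (a sketch against the kompose-generated Deployment above, everything else unchanged) would be to mount the existing claim at MongoDB's real data directory instead of /var/lib/mongo:

        volumeMounts:
        - mountPath: /data/db   # MongoDB's default dbPath, as shown in the logs above
          name: mongo           # the existing PVC-backed volume

Alternatively, you could keep the /var/lib/mongo mount and point mongod at it explicitly (e.g. by passing --dbpath in the container's args), but either way the mount path and the effective dbPath have to match.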