Mongodb container fails if I have a volume set up and more than one container in the workload

5/31/2021

I have set up a mongodb workload in Rancher (2.5.8).

I have also set up a volume for it.

The workload starts fine if I have the scale set to 1: the single container starts and all is fine.

However, if I set the workload to have 2 or more containers, one container starts fine, but the others fail to start.

Here is what my workload looks like if I set it to scale to 2: one container started and is running fine, but the second (and third, if I set the scale to 3) keeps failing.

If I remove the volume, then 2+ containers all start up fine, but data is only stored within each container (and gets lost whenever I redeploy).

But if I have the volume set, the data is stored in the volume (on the host), but then only one container can start.

Thank you in advance for any suggestions.

-- Jason
docker-volume
kubernetes
mongodb
rancher

1 Answer

6/7/2021

Posting this community wiki answer to set a baseline and to show one possible reason why the mongodb containers are failing.

Feel free to edit/expand.


As a lot of information is missing from this question (how the workload was created, how mongodb was provisioned) and there are no logs from the containers, the actual issue is hard to pinpoint.

Assuming that the Deployment was created with the following manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 1 # THEN SCALE TO 3
  selector:
    matchLabels:
      app: mongo  
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo
        imagePullPolicy: "Always"
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /data/db
          name: mongodb
      volumes:
      - name: mongodb
        persistentVolumeClaim:
          claimName: mongo-pvc
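
The manifest above references a PersistentVolumeClaim named mongo-pvc that is not shown in the question. A minimal sketch of such a claim (the access mode and requested size here are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
  - ReadWriteOnce # volume can be mounted read-write by a single node
  resources:
    requests:
      storage: 5Gi # assumed size, adjust as needed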

The part of the setup where the Volume is referenced could be different (for example, hostPath can be used, as sketched after the bullet below), but the premise is:

  • If the Pods physically reference the same /data/db directory, all but one of them will go into a CrashLoopBackOff state.
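
For illustration, a hostPath variant of the volumes section could look like this (the host directory is an assumption); with it, every replica scheduled on that node writes to the same directory:

      volumes:
      - name: mongodb
        hostPath:
          path: /mnt/data/mongo # assumed directory on the node
          type: DirectoryOrCreate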

Following up on this topic:

  • $ kubectl get pods
NAME                     READY   STATUS             RESTARTS   AGE
mongo-5d849bfd8f-8s26t   1/1     Running            0          45m
mongo-5d849bfd8f-l6dzb   0/1     CrashLoopBackOff   13         44m
mongo-5d849bfd8f-wgh6m   0/1     CrashLoopBackOff   13         44m
  • $ kubectl logs mongo-5d849bfd8f-l6dzb
<-- REDACTED --> 
{"t":{"$date":"2021-06-05T12:43:58.025+00:00"},"s":"E",  "c":"STORAGE",  "id":20557,   "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"DBPathInUse: Unable to lock the lock file: /data/db/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /data/db directory"}}
<-- REDACTED --> 

Citing the O'Reilly site on the mongodb production setup:

Specify an alternate directory to use as the data directory; the default is /data/db/ (or, on Windows, \data\db\ on the MongoDB binary’s volume). Each mongod process on a machine needs its own data directory, so if you are running three instances of mongod on one machine, you’ll need three separate data directories. When mongod starts up, it creates a mongod.lock file in its data directory, which prevents any other mongod process from using that directory. If you attempt to start another MongoDB server using the same data directory, it will give an error:

exception in initAndListen: DBPathInUse: Unable to lock the
     lock file: /data/db/mongod.lock (Resource temporarily unavailable).
     Another mongod instance is already running on the
     /data/db directory, terminating

-- Oreilly.com: MongoDB: The Definitive Guide, Chapter 21
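
The same locking behavior can be reproduced outside of Kubernetes; a minimal sketch (the data directories are assumptions) of starting several mongod processes on one machine:

$ mongod --dbpath /data/db1 --port 27017  # locks /data/db1/mongod.lock
$ mongod --dbpath /data/db2 --port 27018  # separate directory and port, starts fine
$ mongod --dbpath /data/db1 --port 27019  # fails with DBPathInUse: the lock file is held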


As an alternative approach you can use other means to provision mongodb, for example a StatefulSet with volumeClaimTemplates, so that each replica gets its own PersistentVolume, or a Helm chart/operator that also configures a MongoDB replica set.
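
A minimal sketch of such a StatefulSet (the headless Service name and storage size are assumptions; note that without replica-set configuration each Pod holds independent data):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo # assumed headless Service for stable network identities
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /data/db
          name: mongodb
  volumeClaimTemplates: # each Pod gets its own PersistentVolumeClaim
  - metadata:
      name: mongodb
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi # assumed size

Because volumeClaimTemplates creates a separate PersistentVolumeClaim per Pod, the mongod.lock conflict shown above does not occur; a Helm chart or the MongoDB community operator can additionally wire the Pods into a proper replica set.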

-- Dawid Kruk
Source: StackOverflow