CrashLoopBackOff when increasing the replica count above 1 on an Azure AKS cluster for a MongoDB image

6/4/2019

I am deploying MongoDB to Azure AKS with an Azure File share as the volume (using a persistent volume and a persistent volume claim). When I increase the replica count above one, CrashLoopBackOff occurs: only one pod is created successfully, and the others fail.
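
To surface the actual error behind the CrashLoopBackOff, the failing pods can be inspected with kubectl. A minimal sketch (mongo-xxxxx is a placeholder for one of the failing pod names):

kubectl get pods -l app=mongo          # list pods and their restart counts
kubectl describe pod mongo-xxxxx       # events, e.g. volume mount or scheduling failures
kubectl logs mongo-xxxxx --previous    # output of the last crashed mongod container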

My Dockerfile for creating the MongoDB image:

FROM ubuntu:16.04

# Add the MongoDB GPG key and the MongoDB 3.2 apt repository for this Ubuntu release
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list

# Install MongoDB
RUN apt-get update && apt-get install -y mongodb-org

# Default MongoDB port
EXPOSE 27017

ENTRYPOINT ["/usr/bin/mongod"]
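
The image is built from this Dockerfile and pushed to a registry that AKS can pull from. A minimal sketch, assuming a hypothetical Azure Container Registry named myregistry (substitute your own registry and tag, which is what the image placeholder in the Deployment below refers to):

# build and tag the image (registry name and tag are placeholders)
docker build -t myregistry.azurecr.io/mongodb:v1 .

# authenticate against the registry and push
az acr login --name myregistry
docker push myregistry.azurecr.io/mongodb:v1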

YAML file for the Deployment and Service:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo
    spec:      
      containers:
      - name: mongo
        image: <my image of mongodb>
        ports:
        - containerPort: 27017
          protocol: TCP
          name: mongo 
        volumeMounts:
        - mountPath: /data/db
          name: az-files-mongo-storage
      volumes:
      - name: az-files-mongo-storage
        persistentVolumeClaim:
          claimName: mong-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
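
The persistent volume and claim referenced above (claimName: mong-pvc) are not shown in the question. For reference, a statically provisioned Azure Files PV/PVC pair could look roughly like the sketch below; the PV name, secret name, share name, and size are hypothetical, and the secret is assumed to hold the storage account name and key:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-file-secret   # secret with azurestorageaccountname/azurestorageaccountkey
    shareName: mongoshare
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mong-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""              # bind to the pre-created PV instead of provisioning dynamically
  resources:
    requests:
      storage: 5Gi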
-- Roushan
azure
kubernetes
mongodb
ubuntu

3 Answers

6/4/2019

You can configure accessModes: ReadWriteMany, but the underlying volume or storage type must actually support this mode; see the access modes table in the Kubernetes persistent volumes documentation.

According to that table, AzureFile supports ReadWriteMany, while AzureDisk does not.
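
For illustration, a dynamically provisioned claim with that access mode on AKS's built-in azurefile storage class might look like this sketch (claim name and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-rwx-pvc
spec:
  accessModes:
    - ReadWriteMany        # requires a backing volume type that supports it, e.g. Azure Files
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi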

-- hariK
Source: StackOverflow

6/4/2019

You should be using a StatefulSet for MongoDB; Deployments are for stateless services.

-- P Ekambaram
Source: StackOverflow

6/4/2019

For your issue, take a look at another question reporting the same error. It seems MongoDB cannot initialize a data directory that another instance has already initialized, so multiple pods cannot share the same volume for /data/db. From the error, I suggest you use the volume only to store data and do any initialization in the Dockerfile when creating the image. Better yet, create a separate volume for every pod through a StatefulSet, which is the recommended approach.

Update:

The YAML file below will work for you:

apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo 
  serviceName: mongo
  replicas: 3 
  template:
    metadata:
      labels:
        app: mongo 
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: charlesacr.azurecr.io/mongodb:v1
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: az-files-mongo-storage
          mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: az-files-mongo-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: az-files-mongo-storage
        resources:
          requests:
            storage: 5Gi

And you need to create the StorageClass before you create the StatefulSet. The YAML file below:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: az-files-mongo-storage
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
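
To put it together, apply the StorageClass first, then the StatefulSet and Service, and verify that each replica gets its own claim. A minimal sketch (file names are placeholders):

kubectl apply -f storageclass.yaml
kubectl apply -f mongo-statefulset.yaml

# one PVC per replica, and all pods should reach Running
kubectl get pvc
kubectl get pods -l app=mongo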

Then the pods run successfully.

-- Charles Xu
Source: StackOverflow