mongo setup in k8s not using persistent volume

7/2/2019

I'm trying to mount a local folder as the /data/db directory of mongo in my minikube cluster. So far, no luck :(

So, I followed these steps. They describe how to create a persistent volume, a persistent volume claim, a service and a pod.

The config files make sense to me, but when I eventually spin up the pod, it restarts once due to an error and then keeps running. The log from the pod (kubectl logs mongo-0) is:

2019-07-02T13:51:49.177+0000 I CONTROL  [main] note: noprealloc may hurt performance in many applications
2019-07-02T13:51:49.180+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=mongo-0
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] db version v4.0.10
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] git version: c389e7f69f637f7a1ac3cc9fae843b635f20b766
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] modules: none
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] build environment:
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten]     distarch: x86_64
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] options: { net: { bindIp: "0.0.0.0" }, storage: { mmapv1: { preallocDataFiles: false, smallFiles: true } } }
2019-07-02T13:51:49.186+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-07-02T13:51:49.186+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=483M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-07-02T13:51:51.913+0000 I STORAGE  [initandlisten] WiredTiger message [1562075511:913047][1:0x7ffa7b8fca80], txn-recover: Main recovery loop: starting at 3/1920 to 4/256
2019-07-02T13:51:51.914+0000 I STORAGE  [initandlisten] WiredTiger message [1562075511:914009][1:0x7ffa7b8fca80], txn-recover: Recovering log 3 through 4
2019-07-02T13:51:51.948+0000 I STORAGE  [initandlisten] WiredTiger message [1562075511:948068][1:0x7ffa7b8fca80], txn-recover: Recovering log 4 through 4
2019-07-02T13:51:51.976+0000 I STORAGE  [initandlisten] WiredTiger message [1562075511:976820][1:0x7ffa7b8fca80], txn-recover: Set global recovery timestamp: 0
2019-07-02T13:51:51.979+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-07-02T13:51:51.986+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2019-07-02T13:51:51.986+0000 I CONTROL  [initandlisten] 
2019-07-02T13:51:51.986+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-07-02T13:51:51.986+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-07-02T13:51:51.986+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-07-02T13:51:51.986+0000 I CONTROL  [initandlisten] 
2019-07-02T13:51:52.003+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-07-02T13:51:52.005+0000 I NETWORK  [initandlisten] waiting for connections on port 27017

If I connect to the pod, MongoDB is running just fine! But it is not using the persistent volume. Here is my pv.yaml:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/k8s/mongo"

Inside the mongo pod I see the mongo files in /data/db, but on my local machine the folder (/k8s/mongo) is empty.
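For completeness, this is how I can check whether the claim is actually bound and what the pod mounted (names as in my manifests, assuming the default namespace):

kubectl get pv mongo-pv            # STATUS should be Bound, CLAIM default/mongo-pv-claim
kubectl get pvc mongo-pv-claim     # STATUS should be Bound, VOLUME mongo-pv
kubectl describe pod mongo-0       # shows the Volumes section and the /data/db mount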

Below I'll also list the persistent volume claim (PVC) and the service/StatefulSet YAML.

pvc.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

mongo.yaml:

apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  clusterIP: None
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      volumes:
        - name: mongo-pv-storage
          persistentVolumeClaim:
            claimName: mongo-pv-claim
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-pv-storage
              mountPath: /data/db

I've also tried, instead of using persistentVolumeClaim, the following:

volumes:
  - name: mongo-pv-storage
    hostPath:
      path: /k8s/mongo

This gives the same issue, except there is no error during creation.
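A quick way to see what actually backs /data/db is to look inside the running pod (a minimal check, assuming the pod is still called mongo-0):

kubectl exec mongo-0 -- df -h /data/db     # filesystem mounted at /data/db
kubectl exec mongo-0 -- ls -la /data/db    # the files MongoDB is writing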

Any suggestions on what the problem might be, or where to look next for more details?

Also, how are the PV and the PVC connected?

-- Jeanluca Scaljeri
docker
kubernetes
minikube
mongodb

3 Answers

7/5/2019

I can confirm that it does work in the Docker for Desktop Kubernetes environment, so the issue is related to minikube. I've tested minikube with the hyperkit and virtualbox drivers. In both cases the files written to /data/db are not visible in the local folder (/k8s/mongo).
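With those drivers the cluster runs inside a VM, so a hostPath such as /k8s/mongo refers to a path inside the minikube VM rather than on the host, which would explain why the local folder stays empty. A minimal way to check and, if needed, map the host folder into the VM (paths are just the ones used above):

minikube ssh                      # open a shell inside the minikube VM
ls -la /k8s/mongo                 # the data written by the pod should show up here
exit

# Map a host folder into the VM so the hostPath PV really points at the host
# (host path on the left, path inside the VM on the right; stays in the foreground)
minikube mount /k8s/mongo:/k8s/mongo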

-- Jeanluca Scaljeri
Source: StackOverflow

7/2/2019

Please try this

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
  labels:
    app: mongodb
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:3
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-volume
              mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi

You can create a whole new PVC and use it here, or change the name. This is working for me. I also faced the same issue when configuring MongoDB while passing commands; remove the commands and try it.
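Note that with volumeClaimTemplates the StatefulSet creates one PVC per replica, named after the template and the pod, and in minikube that claim is typically provisioned dynamically by the default standard StorageClass (so it will not use the manual mongo-pv volume unless the storageClassName matches). A quick check, assuming a single replica:

kubectl get pvc mongo-persistent-volume-mongo-0   # the claim generated for replica 0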

For more details, check this GitHub repository.

-- Harsh Manvar
Source: StackOverflow

7/2/2019

Some suggestions (may or may not help):

Change your storage class name to a quoted string:

storageClassName: "manual"

This one is very weird, but it worked for me: make sure your path /k8s/mongo has the correct permissions, e.g. chmod 777 /k8s/mongo.
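If the cluster runs inside a VM (hyperkit/virtualbox drivers), the directory also has to exist with those permissions inside the VM, not just on the host; a minimal sketch:

minikube ssh                     # shell inside the minikube VM
sudo mkdir -p /k8s/mongo         # create the hostPath directory on the node
sudo chmod 777 /k8s/mongo        # wide-open permissions, as suggested above
exit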

-- DuDoff
Source: StackOverflow