MongoDB container's data becomes "read-only" after restarting Kubernetes, with GlusterFS as storage?

1/24/2017

My MongoDB runs as a Docker container on Kubernetes, with GlusterFS providing the persistent volume. After I restart Kubernetes (the machine powers off and comes back up), none of the mongo pods come back. Their logs:

chown: changing ownership of `/data/db/user_management.ns': Read-only file system
chown: changing ownership of `/data/db/storage.bson': Read-only file system
chown: changing ownership of `/data/db/local.ns': Read-only file system
chown: changing ownership of `/data/db/mongod.lock': Read-only file system

Here /data/db/ is the mounted Gluster volume, and I can confirm it is mounted in rw mode:

# kubectl get pod mongoxxx -o yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - image: mongo:3.0.5
    imagePullPolicy: IfNotPresent
    name: mongo
    ports:
    - containerPort: 27017
      protocol: TCP
    volumeMounts:
    - mountPath: /data/db
      name: mongo-storage
  volumes:
  - name: mongo-storage
    persistentVolumeClaim:
      claimName: auth-mongo-data
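
For reference, the claim auth-mongo-data is a plain GlusterFS-backed PVC, roughly like this (a sketch; the access mode and size here are taken from the PV shown further down, my actual manifest may differ):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-data
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 4Gi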

# kubectl describe pod mongoxxx
...
    Volume Mounts:
      /data/db from mongo-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wdrfp (ro)
    Environment Variables:  <none>
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  mongo-storage:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  auth-mongo-data
    ReadOnly:   false
...

# kubectl get pv xxx -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  name: auth-mongo-data
  resourceVersion: "215201"
  selfLink: /api/v1/persistentvolumes/auth-mongo-data
  uid: fb74a4b9-e0a3-11e6-b0d1-5254003b48ea
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 4Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: auth-mongo-data
    namespace: default
  glusterfs:
    endpoints: glusterfs-cluster
    path: infra-auth-mongo
  persistentVolumeReclaimPolicy: Retain
status:
  phase: Bound
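
The endpoints: glusterfs-cluster field above refers to an Endpoints object listing the Gluster servers; the GlusterFS volume plugin needs it to exist in the pod's namespace in order to mount the volume. It is typically defined roughly like this (a sketch; the IP addresses are placeholders, not my real servers):

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.1.11
  ports:
  - port: 1
- addresses:
  - ip: 192.168.1.12
  ports:
  - port: 1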

And when I run ls on the Kubernetes node:

# ls -ls /var/lib/kubelet/pods/fc6c9ef3-e0a3-11e6-b0d1-5254003b48ea/volumes/kubernetes.io~glusterfs/auth-mongo-data/
total 163849
    4 drwxr-xr-x. 2 mongo input     4096 Jan 22 21:18 journal
65536 -rw-------. 1 mongo input 67108864 Jan 22 21:16 local.0
16384 -rw-------. 1 mongo root  16777216 Jan 23 17:15 local.ns
    1 -rwxr-xr-x. 1 mongo root         2 Jan 23 17:15 mongod.lock
    1 -rw-r--r--. 1 mongo root        69 Jan 23 17:15 storage.bson
    4 drwxr-xr-x. 2 mongo input     4096 Jan 22 21:18 _tmp
65536 -rw-------. 1 mongo input 67108864 Jan 22 21:18 user_management.0
16384 -rw-------. 1 mongo root  16777216 Jan 23 17:15 user_management.ns

I cannot chown these files even though the volume is mounted as rw.
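
To narrow down where the read-only behaviour comes from, it may help to test the GlusterFS FUSE mount directly on the node and to check the volume's health on a Gluster server after the unclean shutdown (a sketch; the path and volume name are the ones from above, and the heal command only applies if infra-auth-mongo is a replicated volume):

# mount | grep auth-mongo-data
# touch /var/lib/kubelet/pods/fc6c9ef3-e0a3-11e6-b0d1-5254003b48ea/volumes/kubernetes.io~glusterfs/auth-mongo-data/rw-test
# gluster volume status infra-auth-mongo
# gluster volume heal infra-auth-mongo info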

My host is CentOS 7.3: Linux c4v160 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux.

-- Haoyuan Ge
centos
docker
glusterfs
kubernetes
mongodb

1 Answer

5/8/2017

I guess it is because the GlusterFS volume I provided was not clean. The GlusterFS volume infra-auth-mongo may contain leftover (dirty) directories. One solution is to remove this volume and create a new one.
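
On the Gluster side, recreating the volume would look roughly like this (a sketch; the brick paths and replica count are assumptions, and the new brick directories should be empty):

# gluster volume stop infra-auth-mongo
# gluster volume delete infra-auth-mongo
# gluster volume create infra-auth-mongo replica 2 gluster1:/bricks/infra-auth-mongo gluster2:/bricks/infra-auth-mongo
# gluster volume start infra-auth-mongo

The PV and PVC pointing at it may also need to be recreated so the claim binds to the clean volume.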

Another solution is to modify the MongoDB Dockerfile so that it forces a change of ownership of /data/db before starting the mongod process, like this: https://github.com/harryge00/mongo/commit/143bfc317e431692010f09b5c0d1f28395d2055b
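
The idea boils down to a wrapper entrypoint that fixes the ownership of /data/db before handing off to the image's normal startup (a minimal sketch, not the exact commit; the path of the stock entrypoint script is an assumption and should be checked against the base image):

FROM mongo:3.0.5
COPY fix-ownership.sh /fix-ownership.sh
RUN chmod +x /fix-ownership.sh
ENTRYPOINT ["/fix-ownership.sh"]
CMD ["mongod"]

And fix-ownership.sh:

#!/bin/bash
set -e
# fix ownership of the data directory up front (ignore failures on files
# that are already owned correctly or sit on a read-only mount)
chown -R mongodb:mongodb /data/db || true
# hand off to the stock entrypoint of the base image (path assumed)
exec /entrypoint.sh "$@"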

-- Haoyuan Ge
Source: StackOverflow