Changing ownership /data/db, input/output error, Kubernetes Mongo Deployment

6/14/2018

I am trying to run a deployment for mongo using minikube. I have created a PersistentVolume using the following configuration:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    name: mongo-claim
  hostPath:
    path: "/test"

The "/test" folder is being mounted using minikube mount <local_path>:/test
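
For reference, the mount command is run on the host before the manifests are applied; the local path below is only an example:

# run on the host; ~/mongo-data is just an example local path
minikube mount ~/mongo-data:/test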

Then I created a PV Claim using the following configuration:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
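
The claim should bind to the volume above through the claimRef; this can be checked with kubectl, where both should report a Bound status:

kubectl get pv mongo-volume
kubectl get pvc mongo-claim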

Finally, I am trying to create a Service and Deployment with the following configuration:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        tier: backend
        app: mongo
    spec:
      containers:
      - name: mongo
        image: "mongo"
        envFrom:
          - configMapRef:
              name: mongo-config
        ports:
          - name: mongo-port
            containerPort: 27017 
        volumeMounts:
          - name: mongo-storage
            mountPath: "/data/db"
      volumes:
        - name: mongo-storage
          persistentVolumeClaim:
            claimName: mongo-claim
---
apiVersion: v1
kind: Service
metadata: 
  name: mongo
spec:
  selector:
    app: mongo
  ports:
    - protocol: TCP
      port: 27017
      targetPort: mongo-port
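
The Deployment above also references a ConfigMap named mongo-config, which I have not included here; it looks roughly like this (the keys and values below are placeholders, not my actual settings):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  # placeholder entries; the real ConfigMap holds my mongo environment settings
  MONGO_INITDB_ROOT_USERNAME: admin
  MONGO_INITDB_ROOT_PASSWORD: password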

The container quits with the error: Changing ownership of '/data/db', Input/Output error.

Question 1) Who is trying to change the ownership of this directory inside the container? Is it the PV Claim? Question 2) Why is that culprit trying to mess with the permissions of the MongoDB container's default storage path?

-- Arpit Goyal
docker
kubernetes
mongodb

1 Answer

2/4/2019

It looks like this is more about the VirtualBox driver for the externally mounted folder than about Kubernetes itself.

In my scenario:

  • I created a folder on my OS X host,
  • mapped that folder into minikube with minikube mount data-storage/:/data-storage,
  • created a PersistentVolume pointing to that folder inside minikube,
  • created a PersistentVolumeClaim pointing to the PV above,
  • tried to start a single, simple MongoDB pod using the PVC above,

and got constantly restarting pods with logs:

Fatal Assertion and fsync: Invalid Argument

I fought with this for a few hours and finally found this issue:

https://github.com/mvertes/docker-alpine-mongo/issues/1

which basically reports issues with the VirtualBox driver when the folder is mapped from the host.

Once I mapped the PersistentVolume to /data inside of minikube, my pod went live like a charm.
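
For reference, the working PersistentVolume simply points at a path under /data inside the minikube VM rather than at the host-mounted folder; a minimal sketch (the name, size and exact path are just examples):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # a directory inside the minikube VM; /data is persisted across VM restarts
    path: "/data/mongo"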

In my case I decided that, since minikube is a development environment, there is no reason to stay stuck on this.

UPDATE:

I wish I had found this out earlier; it would have saved me some time!

Docker Desktop (CE) has built-in Kubernetes!

All you need to do is go to the preferences and turn it on; that's it, no need for VirtualBox or minikube at all.

And the best thing is that shared folders (on the File Sharing tab) are available to Kubernetes; I checked this with MongoDB inside of k8s. It is also way faster than minikube (which, by the way, was failing all the time on my OS X).
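
To point kubectl at the Docker Desktop cluster, switching the context is enough (the context name can differ between Docker Desktop versions, e.g. docker-for-desktop on older releases):

kubectl config get-contexts
kubectl config use-context docker-desktop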

Hope this saves someone some time.

-- user2932688
Source: StackOverflow