Failed to mount a volume on gcePersistentDisk for mongo pod on gke

4/5/2019

I'm trying to run a pod on GKE with a mongo container and mount a persistent volume for its data using a gcePersistentDisk, but the volume fails to mount.

First, I created the persistent disk by issuing:

gcloud compute disks create --size=1GiB --zone=europe-west3-a mongodb
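(As a sanity check, the disk can be inspected with the command below; a gcePersistentDisk will only attach if the disk lives in the same zone as the cluster's nodes.)

gcloud compute disks describe mongodb --zone=europe-west3-a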

Then, I created the pod using the following manifest:

apiVersion: v1
kind: Pod
metadata:
  name: mongodb 
spec:
  volumes:
  - name: mongodb-data
    gcePersistentDisk:
      pdName: mongodb
      fsType: nfs4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
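
(For completeness, the pod would be created with something like the following, assuming the manifest above is saved as mongodb-pod.yaml:)

kubectl create -f mongodb-pod.yaml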

After a while, listing the pods gives this result:

NAME      READY   STATUS              RESTARTS   AGE
mongodb   0/1     ContainerCreating   0          23m

And describing the pod shows these events:

Warning  FailedMount             5m (x18 over 26m)  kubelet, gke-mongo-default-pool-02c59988-vmhz  MountVolume.MountDevice failed for volume "mongodb-data" : executable file not found in $PATH

Warning  FailedMount             4m (x10 over 24m)  kubelet, gke-mongo-default-pool-02c59988-vmhz  Unable to mount volumes for pod "mongodb_default(f1625bde-579d-11e9-a35f-42010a8a00a0)": timeout expired waiting for volumes to attach or mount for pod "default"/"mongodb". list of unmounted volumes=[mongodb-data]. list of unattached volumes=[mongodb-data default-token-5dxps]

I still can't figure out why it's not ready. Any suggestions, please?

-- Omar L.
google-kubernetes-engine
kubernetes
mongodb

1 Answer

4/5/2019

The problem was fsType: nfs4; using fsType: ext4 instead fixed it!
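
For reference, the working volumes section is identical to the manifest above except for the fsType (a gcePersistentDisk is a block device, so the fsType has to be a local filesystem the node can format and mount, with ext4 being the usual choice on GKE):

  volumes:
  - name: mongodb-data
    gcePersistentDisk:
      pdName: mongodb
      fsType: ext4

With nfs4, the kubelet most likely tried to run a mkfs/mount helper for that filesystem type, which does not exist on the node, hence the "executable file not found in $PATH" error.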

-- Omar L.
Source: StackOverflow