MountVolume.SetUp failed for volume "mongo-two": lstat /var/lib/mongo: no such file or directory

12/6/2018

I am trying to get the mongo-replicaset chart working.

Kubelet reports this error during the mongo-replicaset chart deployment:

MountVolume.SetUp failed for volume "mongo-two": lstat /mongo/data: no such file or directory

On each node, the /mongo/data folder exists, which is driving me crazy. Note: the lstat command doesn't exist on the nodes, but I suspect the kubelet container brings it.

I have 3 persistent volumes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-[one/two/three]
spec:
  capacity:
    storage: 40Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mongo/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - [one/two/three]
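A local PV only points at a path; Kubernetes will not create it, so the directory must already exist on the node named in the nodeAffinity rule before the pod is scheduled there. A minimal sanity check might look like this (the PV names are the ones from the manifest above; the kubectl step assumes access to the cluster and is guarded so it is harmless elsewhere):

```shell
# A local PV does not create its backing directory -- it must already
# exist on the matching node. On each node (one, two, three):
mkdir -p /mongo/data

# From a machine with cluster access, confirm each PV binds to a claim:
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pv mongo-one mongo-two mongo-three
fi
```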

StatefulSet yaml: (mongo-replicaset helm chart 3.8.0)

...
  volumeMounts:
    - mountPath: /data/db
      name: datadir
...
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: datadir
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: local-storage
      resources:
        requests:
          storage: 40Gi
...

I'm still getting "no such file or directory" on a directory that exists. What's incorrect? I can give additional data if needed.

Thank you

-- Slim
kubernetes
persistent-volumes

2 Answers

12/6/2018

In your statefulset, you must have a volume mount with a subPath field. If you remove that subPath field from your statefulset yaml file, you will not encounter this error.

The issue is a bug in the hostpath volume provisioner: it fails with "lstat: no such file or directory" whenever a subPath field is present in the deployment/statefulset, even if the field is empty. This error keeps the statefulset from coming up, and the pods go into containerCreatingConfigErr (this happened to me on kubeadm).

For more info, you can visit this link:

https://github.com/kubernetes/minikube/issues/2256
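For reference, the pattern this answer describes would look like this in the statefulset spec (a hypothetical excerpt; datadir is the volume name from the chart above, and per the linked issue even an empty subPath triggers the bug, so deleting that line is the fix):

```yaml
# volumeMounts entry with the problematic subPath field:
volumeMounts:
  - mountPath: /data/db
    name: datadir
    subPath: ""   # <- remove this line to avoid the lstat error
```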

-- Prafull Ladha
Source: StackOverflow

12/7/2018

The problem came from the kubelet, which is containerized (because of the Rancher installation).

I added a volume definition to the kubelet container and it's OK now.

For those interested in creating persistent local volumes in a Rancher Kubernetes installation, just add this to your cluster yaml so that the kubelet can mount your volume:

services:
  kubelet:
    extra_binds:
      - /path_to_mount:/path_to_mount:rshared

Don't forget the second colon and the rshared mount propagation flag.

-- Slim
Source: StackOverflow