Kubernetes - MountVolume.NewMounter initialization failed for volume "<volume-name>" : path does not exist

7/23/2021

I am trying to set up a local PersistentVolume backed by local storage on WSL, but the pod's STATUS is stuck at Pending.

Running kubectl describe pod <pod-name> gives the error below:

Warning FailedMount 21s (x7 over 53s) kubelet MountVolume.NewMounter initialization failed for volume "pv1" : path "/mnt/data" does not exist

The path /mnt/data has been created and exists on the local machine, but it cannot be accessed by the container.
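It can be confirmed on the machine itself with:

ls -ld /mnt/data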

The PersistentVolume, PersistentVolumeClaim, and Pod configurations are below.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    fsType: ext4
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage

---

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: www
    image: nginx:alpine
    ports:
      - containerPort: 80
        name: www
    volumeMounts:
      - name: www-store
        mountPath: /usr/share/nginx/html
  volumes:
    - name: www-store
      persistentVolumeClaim:
        claimName: pvc1

Any help would be appreciated.

-- Kodi
kubernetes
kubernetes-pod
persistent-volume-claims
persistent-volumes

2 Answers

1/25/2022

If running on a Rancher Kubernetes Engine (RKE) cluster, this problem can arise from the fact that each kubelet itself runs as a container, and therefore does not see the filesystem of the node it runs on.
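A quick way to confirm this is to compare what the node sees with what the kubelet container sees (assuming the standard RKE container name, kubelet; /mnt/data is the path from the question):

# On the node itself the directory exists:
ls -ld /mnt/data
# Inside the kubelet container it does not, unless it is bind-mounted:
docker exec kubelet ls -ld /mnt/data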

The solution is to add extra bind mounts to the kubelet service when configuring the cluster in cluster.yml. For example, to have /data-1 on the node mounted as /data-1 inside the kubelet container:

services:
  # ...other services...
  kubelet:
    extra_binds:
    - "/data-1:/data-1"
-- Michail Alexakis
Source: StackOverflow

7/23/2021

You are using nodeAffinity for the PV, telling Kubernetes that the volume lives on node1. Chances are that either:

1. node1 does not have a /mnt/data directory, which is the local path backing the volume, or
2. node1 does have /mnt/data, but the pod is getting scheduled onto some other node that does not have it.

For reference, the PV in question:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    fsType: ext4
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
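To check which of the two it is, compare the node the pod actually landed on with the node the PV requires (pod and PV names taken from the manifests above):

kubectl get pod pod1 -o wide   # the NODE column shows where the pod was scheduled
kubectl describe pv pv1        # shows the Node Affinity constraint (node1)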

Solution: Make sure the /mnt/data directory is present on all schedulable nodes, for example:
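# Run on each schedulable node (requires shell access to the node):
sudo mkdir -p /mnt/data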

OR

Try modifying your Pod spec to add nodeName, nodeSelector, or nodeAffinity to force the pod onto the node that has the proper local path. In the example below it is assumed that node1 has the /mnt/data directory present (a nodeSelector variant is sketched after it):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  nodeName: node1  # <------------ this
  containers:
  - name: www
    image: nginx:alpine
    ports:
      - containerPort: 80
        name: www
    volumeMounts:
      - name: www-store
        mountPath: /usr/share/nginx/html
  volumes:
    - name: www-store
      persistentVolumeClaim:
        claimName: pvc1
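Alternatively, a nodeSelector on the built-in hostname label achieves the same pinning while still going through the scheduler (a sketch; it assumes the node's kubernetes.io/hostname label is node1, matching the PV's affinity):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  nodeSelector:
    kubernetes.io/hostname: node1
  # ...rest of the spec unchanged...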
-- P....
Source: StackOverflow