K8s mounting persistentVolume failed, "timed out waiting for the condition" on docker-desktop

10/17/2021

When trying to bind a pod to an NFS PersistentVolume hosted on another pod, the mount fails on docker-desktop. The exact same YAML works perfectly fine elsewhere.

The error:

Events:
  Type     Reason       Age    From               Message
  ----     ------       ----   ----               -------
  Normal   Scheduled    4m59s  default-scheduler  Successfully assigned test-project/test-digit-5576c79688-zfg8z to docker-desktop
  Warning  FailedMount  2m56s  kubelet            Unable to attach or mount volumes: unmounted volumes=[lagg-connection], unattached volumes=[lagg-connection kube-api-access-h68w7]: timed out waiting for the condition
  Warning  FailedMount  37s    kubelet            Unable to attach or mount volumes: unmounted volumes=[lagg-connection], unattached volumes=[kube-api-access-h68w7 lagg-connection]: timed out waiting for the condition
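For anyone hitting the same timeout: before assuming a platform bug, it can help to narrow down whether the claim is failing to bind or the NFS mount itself is failing. These are standard kubectl diagnostics, using the resource names from the manifests below (output will vary per cluster):

```shell
# Is the claim actually bound to the PV?
kubectl -n test-project get pvc test-lagg-claim
kubectl get pv test-lagg-volume

# Does the test-lagg Service have endpoints, i.e. is the NFS server pod up?
kubectl -n test-project get endpoints test-lagg

# Full event history for the stuck pod
kubectl -n test-project describe pod -l app=static
```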

A minimal version of the project, which you can apply to test yourself:

apiVersion: v1
kind: Namespace
metadata:
  name: test-project
  labels:
    name: test-project
---
apiVersion: v1
kind: Service
metadata:
  labels:
    environment: test
  name: test-lagg
  namespace: test-project
spec:
  clusterIP: 10.96.13.37
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
  selector:
    app: nfs-server
    environment: test
    scope: backend
---
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    environment: test
  name: test-lagg-volume
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 2Gi
  nfs:
    path: /
    server: 10.96.13.37
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    environment: test
  name: test-lagg-claim
  namespace: test-project
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: ""
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: static
    environment: test
    scope: backend
  name: test-digit
  namespace: test-project
spec:
  selector:
    matchLabels:
      app: static
      environment: test
      scope: backend
  template:
    metadata:
      labels:
        app: static
        environment: test
        scope: backend
    spec:
      containers:
      - image: busybox
        name: digit
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600']
        volumeMounts:
        - mountPath: /cache
          name: lagg-connection
      volumes:
      - name: lagg-connection
        persistentVolumeClaim:
          claimName: test-lagg-claim
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    environment: test
  name: test-lagg
  namespace: test-project
spec:
  selector:
    matchLabels:
      app: nfs-server
      environment: test
      scope: backend
  template:
    metadata:
      labels:
        app: nfs-server
        environment: test
        scope: backend
    spec:
      containers:
      - image: gcr.io/google_containers/volume-nfs:0.8
        name: lagg
        ports:
        - containerPort: 2049
          name: lagg
        - containerPort: 20048
          name: mountd
        - containerPort: 111
          name: rpcbind
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /exports
          name: lagg-claim
      volumes:
      - emptyDir: {}
        name: lagg-claim
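For reference, the whole multi-document manifest above (six objects: Namespace, Service, PersistentVolume, PersistentVolumeClaim, and two Deployments) can be applied and watched in one go; the filename here is just whatever you saved it as:

```shell
# Apply all objects and watch the pods come up
kubectl apply -f test-project.yaml
kubectl -n test-project get pods -w
```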

Besides emptyDir, I have also tried hostPath. This setup worked before, and I'm not sure what, if anything, I've changed since it stopped working.

-- Ral
docker-desktop
kubernetes
persistent-volume-claims
persistent-volumes

1 Answer

10/17/2021

Updating my Docker for Windows installation from 4.0.1 to 4.1.1 has fixed this problem.
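For anyone comparing a working environment against a broken one, the versions in play can be checked with standard commands (the exact output depends on your machine):

```shell
# Kubernetes client/server versions bundled with docker-desktop
kubectl version --short

# Node info, including the container runtime version
kubectl get nodes -o wide

# Docker engine version
docker version --format '{{.Server.Version}}'
```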

-- Ral
Source: StackOverflow