I created an NFS PV and PVC and referenced the claim in my Deployment YAML, but when I apply the manifest, the pod fails to start with the following error:
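For context, the PV/PVC looked roughly like this (a sketch: the PV name, NFS server, and export path are taken from the error output below; the claim name, capacity, and access mode are assumptions):

```yaml
# Hypothetical reconstruction of the PV/PVC pair; only the nfs server,
# path, and PV name are confirmed by the error messages in this post.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-certification-files-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.123.95
    path: /mnt/nfs/certification
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-certification-files-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
  volumeName: data-certification-files-pv
```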
Warning FailedMount 13m kubelet, server08 MountVolume.SetUp failed for volume "data-certification-files-pv" : mount failed: exit status 32
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/341e752b-acbb-4c26-a664-a25b5f1dd28b/volumes/kubernetes.io~nfs/data-certification-files-pv --scope -- mount -t nfs 192.168.123.95:/mnt/nfs/certification /var/lib/kubelet/pods/341e752b-acbb-4c26-a664-a25b5f1dd28b/volumes/kubernetes.io~nfs/data-certification-files-pv
Output: Running scope as unit run-25571.scope.
mount.nfs: Stale file handle
Then I checked the kubelet log and found this:
Jun 11 17:06:56 server08 kubelet[16597]: E0611 17:06:56.977097 16597 kubelet.go:1681] Unable to attach or mount volumes for pod "weedfs-master-7b95f44996-xt427_runsdata(341e752b-acbb-4c26-a664-a25b5f1dd28b)": unmounted volumes=[data], unattached volumes=[default-token-7xtzl log data]: timed out waiting for the condition; skipping pod
Jun 11 17:06:56 server08 kubelet[16597]: E0611 17:06:56.977150 16597 pod_workers.go:191] Error syncing pod 341e752b-acbb-4c26-a664-a25b5f1dd28b ("weedfs-master-7b95f44996-xt427_runsdata(341e752b-acbb-4c26-a664-a25b5f1dd28b)"), skipping: unmounted volumes=[data], unattached volumes=[default-token-7xtzl log data]: timed out waiting for the condition
Jun 11 17:07:23 server08 kubelet[16597]: E0611 17:07:23.636510 16597 mount_linux.go:150] Mount failed: exit status 32
Jun 11 17:07:23 server08 kubelet[16597]: Mounting command: systemd-run
Jun 11 17:07:23 server08 kubelet[16597]: Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/341e752b-acbb-4c26-a664-a25b5f1dd28b/volumes/kubernetes.io~nfs/data-certification-files-pv --scope -- mount -t nfs 192.168.123.95:/mnt/nfs/certification /var/lib/kubelet/pods/341e752b-acbb-4c26-a664-a25b5f1dd28b/volumes/kubernetes.io~nfs/data-certification-files-pv
Jun 11 17:07:23 server08 kubelet[16597]: Output: Running scope as unit run-30998.scope.
Jun 11 17:07:23 server08 kubelet[16597]: mount.nfs: Stale file handle
Jun 11 17:07:23 server08 kubelet[16597]: E0611 17:07:23.637271 16597 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/nfs/341e752b-acbb-4c26-a664-a25b5f1dd28b-data-certification-files-pv podName:341e752b-acbb-4c26-a664-a25b5f1dd28b nodeName:}" failed. No retries permitted until 2020-06-11 17:09:25.637186297 +0800 CST m=+7477.442805373 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"data-certification-files-pv\" (UniqueName: \"kubernetes.io/nfs/341e752b-acbb-4c26-a664-a25b5f1dd28b-data-certification-files-pv\") pod \"weedfs-master-7b95f44996-xt427\" (UID: \"341e752b-acbb-4c26-a664-a25b5f1dd28b\") : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/341e752b-acbb-4c26-a664-a25b5f1dd28b/volumes/kubernetes.io~nfs/data-certification-files-pv --scope -- mount -t nfs 192.168.123.95:/mnt/nfs/certification /var/lib/kubelet/pods/341e752b-acbb-4c26-a664-a25b5f1dd28b/volumes/kubernetes.io~nfs/data-certification-files-pv\nOutput:Running scope as unit run-30998.scope.\nmount.nfs: Stale file handle\n"
I checked the mount point:
[root@server08 pods]# ll /var/lib/kubelet/pods/341e752b-acbb-4c26-a664-a25b5f1dd28b/volumes/kubernetes.io~nfs/data-certification-files-pv
ls: cannot access /var/lib/kubelet/pods/341e752b-acbb-4c26-a664-a25b5f1dd28b/volumes/kubernetes.io~nfs/data-certification-files-pv: No such file or directory
[root@server08 pods]# ll /var/lib/kubelet/pods/341e752b-acbb-4c26-a664-a25b5f1dd28b/volumes/kubernetes.io~nfs
total 0
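To narrow down a "Stale file handle" error, it can help to reproduce the mount by hand on the node, outside of the kubelet. A sketch, to be run as root on server08, using the server and export path from the kubelet output above (the /tmp/nfs-test directory is just a throwaway mount target):

```shell
# Ask the NFS server which exports it currently offers to this client.
showmount -e 192.168.123.95

# Attempt the exact same mount the kubelet tried, against a scratch directory.
mkdir -p /tmp/nfs-test
mount -t nfs 192.168.123.95:/mnt/nfs/certification /tmp/nfs-test

# If this also fails with "mount.nfs: Stale file handle", the problem is on
# the NFS server side (the export's file handle is no longer valid), not in
# the kubelet or the pod definition.
umount /tmp/nfs-test
```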
Other info: kubeadm v1.18, kubelet v1.18.
Why doesn't the kubelet create the mount point directory?
I don't know the root cause, but after I replaced the specific client IP in the NFS server's export configuration with the wildcard * and restarted NFS, the mount works.
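For reference, the server-side change amounts to something like this in /etc/exports (a sketch: the original client restriction and the export options are assumptions, since the post doesn't show the exports file):

```shell
# /etc/exports, before (assumed): export restricted to specific clients
#   /mnt/nfs/certification 192.168.123.0/24(rw,sync,no_root_squash)
#
# After: export opened to any client with the * wildcard
#   /mnt/nfs/certification *(rw,sync,no_root_squash)

# Re-export the filesystems and restart the NFS server.
exportfs -ra
systemctl restart nfs-server
```

A likely explanation for why this helped: "Stale file handle" means the client presented a file handle the server no longer recognizes, which typically happens when an export or its underlying filesystem is changed or re-created. Rewriting the export and restarting the NFS service causes clients to obtain fresh handles, so it may have been the restart, as much as the wildcard itself, that cleared the error.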