Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/06b9ae42-8e99-11e9-b888-a44c24184b19/volumes/kubernetes.io~nfs/nfs-data --scope -- mount -t nfs 10.100.155.82:/exports/www /var/lib/kubelet/pods/06b9ae42-8e99-11e9-b888-a44c24184b19/volumes/kubernetes.io~nfs/nfs-data
Output: Running scope as unit: run-r26f9da6c287846589bec8d059c33441d.scope
mount.nfs: Connection timed out

FailedMount Warning 2019-06-14T11:48:46Z typo3-app-67b58d7657-cvqdg Pod
Unable to mount volumes for pod "typo3-app-67b58d7657-cvqdg_default(1fb4c719-8e9a-11e9-b888-a44c24184b19)": timeout expired waiting for volumes to attach or mount for pod "default"/"typo3-app-67b58d7657-cvqdg". list of unmounted volumes=[nfs-data nfs-data-src]. list of unattached volumes=[nfs-data nfs-data-src default-token-lmtl4]
FailedMount Warning 2019-06-14T11:49:04Z
Weirdly, I have no idea where it is getting 10.100.155.82 from. That was the previous ClusterIP of the NFS Service...
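For reference, the Service's current ClusterIP can be checked like this (nfs-server is only a placeholder for whatever the NFS Service is actually named):

kubectl get svc -o wide
kubectl get svc nfs-server -o jsonpath='{.spec.clusterIP}'

The Deployment that mounts the NFS exports: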
apiVersion: apps/v1
kind: Deployment
metadata:
  name: typo3-app
  labels:
    app: typo3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: typo3
  template:
    metadata:
      labels:
        app: typo3
    spec:
      containers:
      - name: app
        image: us.gcr.io/objit-chris/chrisjitit-typo3:v11
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html-chrisjitit
          name: nfs-data
        - mountPath: /var/www/typo3_src-6.2.6
          name: nfs-data-src
      volumes:
      - name: nfs-data
        nfs:
          # https://github.com/kubernetes/minikube/issues/3417
          # server is not resolved using kube dns (so can't resolve to a service name - hence we need the IP)
          #server: 10.11.250.37
          server: 10.97.78.206
          path: /exports/www
      - name: nfs-data-src
        nfs:
          # https://github.com/kubernetes/minikube/issues/3417
          # server is not resolved using kube dns (so can't resolve to a service name - hence we need the IP)
          #server: 10.11.250.37
          server: 10.97.78.206
          path: /exports/www/typo3_src
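Since the mount target is just the Service's ClusterIP, one workaround sketch would be to pin the ClusterIP on the NFS Service so a delete/recreate does not hand out a new address. This is only a sketch: the Service name, selector, ports, and the chosen IP (which must be free and inside the cluster's service CIDR) are assumptions, not the actual deployed config:

apiVersion: v1
kind: Service
metadata:
  name: nfs-server          # assumed name of the NFS Service
spec:
  clusterIP: 10.97.78.206   # pin the IP so recreating the Service keeps the same address
  selector:
    app: nfs-server         # assumed label on the NFS server pod
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111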
What may be the cause of this timeout / wrong IP being used?
I tried deleting the deployment and changing its name, but that still didn't seem to work; a few minutes later, though, it did work... Really strange behavior.
Ran into this issue again...:
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS   REASON   AGE
nfs-data                                   10Gi       RWO            Retain           Available                                              67m
pvc-f1353542-a8b1-11e9-bdf7-38ffa66115bc   10Gi       RWO            Delete           Bound       default/nfs-data   standard                67m

kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-data   Bound    pvc-f1353542-a8b1-11e9-bdf7-38ffa66115bc   10Gi       RWO            standard       67m
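Note that the hand-created nfs-data PV is still Available while the claim got bound to a dynamically provisioned "standard" volume instead. If the intent was for the claim to bind the static NFS PV (that is an assumption on my part), a sketch of a claim that forces that binding would look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data
spec:
  storageClassName: ""   # empty string disables dynamic provisioning by the "standard" class
  volumeName: nfs-data   # bind explicitly to the pre-created NFS PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi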
It keeps picking up an old NFS IP. It's a bug...:
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9470ac17-a8b9-11e9-bdf7-38ffa66115bc/volumes/kubernetes.io~nfs/nfs-data-src --scope -- mount -t nfs 10.11.250.37:/exports/www/typo3_src /var/lib/kubelet/pods/9470ac17-a8b9-11e9-bdf7-38ffa66115bc/volumes/kubernetes.io~nfs/nfs-data-src
Output: Running scope as unit: run-ref2095cb52c94d0c87de5458c3b16733.scope
mount.nfs: Connection timed out
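To track down where the stale 10.11.250.37 is still recorded, the NFS server fields in the PV specs and in the pod's inline volumes can be inspected directly; the pod name below is illustrative (taken from the earlier events), substitute the current one:

kubectl get pv -o yaml | grep -n 'server:'
kubectl get pod typo3-app-67b58d7657-cvqdg -o yaml | grep -B2 -A2 'server:'

# after fixing the address, deleting the pod forces a fresh mount attempt
kubectl delete pod typo3-app-67b58d7657-cvqdg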