k8s version: v1.9
env: VirtualBox
OS: CoreOS
It is a single-node Kubernetes cluster. I followed the steps below:
Followed https://rook.io/docs/rook/v0.5/k8s-pre-reqs.html and updated the kubelet with:
Environment="RKT_OPTS=--volume modprobe,kind=host,source=/usr/sbin/modprobe \
--mount volume=modprobe,target=/usr/sbin/modprobe \
--volume lib-modules,kind=host,source=/lib/modules \
--mount volume=lib-modules,target=/lib/modules \
--uuid-file-save=/var/run/kubelet-pod.uuid"
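For reference, on CoreOS that RKT_OPTS line goes into a systemd drop-in for the kubelet unit. A minimal sketch of applying it, assuming the default kubelet unit name and drop-in path (adjust both for your setup):

```shell
# Assumed drop-in location; the kubelet unit name on your node may differ.
DROPIN_DIR=/etc/systemd/system/kubelet.service.d
sudo mkdir -p "$DROPIN_DIR"
sudo tee "$DROPIN_DIR/10-rook-rkt-opts.conf" <<'EOF'
[Service]
Environment="RKT_OPTS=--volume modprobe,kind=host,source=/usr/sbin/modprobe \
  --mount volume=modprobe,target=/usr/sbin/modprobe \
  --volume lib-modules,kind=host,source=/lib/modules \
  --mount volume=lib-modules,target=/lib/modules \
  --uuid-file-save=/var/run/kubelet-pod.uuid"
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```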
Installed the Ceph utility:
rbd -v
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
All Rook pods are running, but the MySQL pod fails with the error 'timeout expired waiting for volumes to attach/mount for pod':
➜ kubectl get pod -n rook-system
NAME READY STATUS RESTARTS AGE
rook-agent-rqw6j 1/1 Running 0 21m
rook-operator-5457d48c94-bhh2z 1/1 Running 0 22m
➜ kubectl get pod -n rook
NAME READY STATUS RESTARTS AGE
rook-api-848df956bf-fhmg2 1/1 Running 0 20m
rook-ceph-mgr0-cfccfd6b8-8brxz 1/1 Running 0 20m
rook-ceph-mon0-xdd77 1/1 Running 0 21m
rook-ceph-mon1-gntgh 1/1 Running 0 20m
rook-ceph-mon2-srmg8 1/1 Running 0 20m
rook-ceph-osd-84wmn 1/1 Running 0 20m
➜ kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-6a4c5c2a-127d-11e8-a846-080027b424ef 20Gi RWO Delete Bound default/mysql-pv-claim rook-block 15m
➜ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
mysql-pv-claim Bound pvc-6a4c5c2a-127d-11e8-a846-080027b424ef 20Gi RWO rook-block 15m
➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
wordpress-mysql-557ffc4f69-8zxsq 0/1 ContainerCreating 0 16m
Error when I describe pod : FailedMount Unable to mount volumes for pod "wordpress-mysql-557ffc4f69-8zxsq_default(6a932df1-127d-11e8-a846-080027b424ef)": timeout expired waiting for volumes to attach/mount for pod "default"/"wordpress-mysql-557ffc4f69-8zxsq". list of unattached/unmounted volumes=[mysql-persistent-storage]
Also added the following option to rook-operator.yaml
- name: FLEXVOLUME_DIR_PATH
value: "/var/lib/kubelet/volumeplugins"
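Note that FLEXVOLUME_DIR_PATH only tells the Rook operator where to install its flexvolume driver; the kubelet must also be pointed at the same directory. A sketch of the matching kubelet flag, assuming the value above (how it is wired into the kubelet unit depends on your setup):

```shell
# Must match FLEXVOLUME_DIR_PATH in rook-operator.yaml;
# on CoreOS this goes into the kubelet unit / wrapper invocation.
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
```

After changing either side, restart the kubelet and re-create the rook-operator pod so both agree on the path.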
Could you please help with this? Please let me know if you need further details. I have checked similar issues, but their solutions did not work for me.
Are you using CephFS or RBD volumes from Ceph? Here are some things to check:
Confirm that your pod can communicate with the Ceph cluster; this looks like an issue with communication between the pod and the Ceph volumes you're trying to use.
Check that your Ceph volume plugins (the flexvolume drivers) are installed and configured correctly.
What's the state of kubectl get pv? Take a look at your persistent volumes and claims.
You could also try the Rook.io tool; it has good integration with Ceph storage.
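To dig further with the information already posted, a few diagnostic commands (pod names are taken from the output above; the flexvolume path is the one set in rook-operator.yaml, and the driver directory name may differ by Rook version):

```shell
# Full event stream for the stuck pod
kubectl describe pod wordpress-mysql-557ffc4f69-8zxsq

# The rook-agent log usually shows the actual attach/mount failure
kubectl -n rook-system logs rook-agent-rqw6j

# Verify the rbd kernel module is loadable on the node
lsmod | grep rbd

# Verify the flexvolume driver was installed where the kubelet looks
ls -l /var/lib/kubelet/volumeplugins/rook.io~rook/
```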