Some time ago, I created a Ceph cluster with Rook on a single-node k3s cluster, just to try it out, and it worked very well. I was able to provide storage to other pods through CephFS. I followed the example given in the Rook quickstart documentation to do this.
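For context, this is roughly what I applied to set it up, following the quickstart (a sketch; the file names are the ones from the `cluster/examples/kubernetes/ceph/` directory of the Rook release I used, and may differ slightly in other versions):

```sh
# from the Rook repo, cluster/examples/kubernetes/ceph/
k3s kubectl apply -f common.yaml      # namespace, CRDs, RBAC
k3s kubectl apply -f operator.yaml    # the rook-ceph operator deployment
k3s kubectl apply -f cluster.yaml     # the CephCluster resource
k3s kubectl apply -f filesystem.yaml  # the CephFS filesystem the other pods mount
```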
However, two days ago, without any intervention on my part, the Ceph cluster stopped working. The Ceph manager pod seems to be the one failing: my pod rook-ceph-mgr-a-6447569f69-5prdw is crash-looping, and here are its events:
Events:
Type     Reason       Age                    From                Message
----     ------       ----                   ----                -------
Warning  BackOff      41m (x888 over 6h5m)   kubelet, localhost  Back-off restarting failed container
Warning  Unhealthy    36m (x234 over 6h14m)  kubelet, localhost  Liveness probe failed: Get http://10.42.0.163:9283/: dial tcp 10.42.0.163:9283: connect: connection refused
Warning  FailedMount  31m (x2 over 31m)      kubelet, localhost  MountVolume.SetUp failed for volume "rook-ceph-mgr-a-keyring" : failed to sync secret cache: timed out waiting for the condition
Warning  FailedMount  31m (x2 over 31m)      kubelet, localhost  MountVolume.SetUp failed for volume "rook-ceph-mgr-token-bf88n" : failed to sync secret cache: timed out waiting for the condition
Warning  FailedMount  31m (x2 over 31m)      kubelet, localhost  MountVolume.SetUp failed for volume "rook-config-override" : failed to sync configmap cache: timed out waiting for the condition
Normal   Killing      28m (x2 over 30m)      kubelet, localhost  Container mgr failed liveness probe, will be restarted
Normal   Pulled       28m (x3 over 31m)      kubelet, localhost  Container image "ceph/ceph:v14.2.7" already present on machine
Normal   Created      28m (x3 over 31m)      kubelet, localhost  Created container mgr
Normal   Started      28m (x3 over 31m)      kubelet, localhost  Started container mgr
Warning  BackOff      6m47s (x50 over 22m)   kubelet, localhost  Back-off restarting failed container
Warning  Unhealthy    63s (x28 over 30m)     kubelet, localhost  Liveness probe failed: Get http://10.42.0.163:9283/: dial tcp 10.42.0.163:9283: connect: connection refused
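From these events, the liveness probe is just an HTTP GET against the mgr's metrics port (9283), so the refusal can be checked by hand with something like this (a sketch; the pod IP is copied from the events above and changes on every restart):

```sh
# throwaway curl pod hitting the mgr's liveness/metrics endpoint
k3s kubectl run curl-test -n rook-ceph --rm -it --restart=Never \
  --image=curlimages/curl --command -- curl -v http://10.42.0.163:9283/
# expected to fail with "connection refused", given the probe failures above
```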
I don't know whether `failed to sync secret cache` is the cause or a consequence here. Is this a Rook issue or a k3s issue?
There is no output from `k3s kubectl logs rook-ceph-mgr-a-6447569f69-5prdw -n rook-ceph`, and adding `-p` to read the previous (crashed) container's logs changes nothing.
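In case it helps, these are the other places I know to look for logs (a sketch; `deploy/rook-ceph-operator` is the default deployment name from the Rook manifests):

```sh
# the operator usually logs why it is (re)creating or reconfiguring the mgr
k3s kubectl logs -n rook-ceph deploy/rook-ceph-operator

# full pod description, where the events pasted above come from
k3s kubectl describe pod -n rook-ceph rook-ceph-mgr-a-6447569f69-5prdw
```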
Thank you for your help. This is my first question on Stack Overflow, so I hope I asked it correctly. :)