I set up Kubernetes 1.18 on CentOS 7. We are also using a customized pod CIDR, configured with:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=IPALLOC"
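For reference, here is a minimal sketch of how that URL is assembled with the CIDR filled in. It only builds and prints the URL; 10.32.0.0/12 is Weave's default allocation range and is used here purely as an example value, and the k8s-version parameter is left as a placeholder since on a real cluster it comes from `kubectl version`:

```shell
# Sketch: build the Weave Net manifest URL with an explicit pod CIDR.
# 10.32.0.0/12 is Weave's default range, used here only as an example.
IPALLOC_RANGE="10.32.0.0/12"
# On a real cluster this comes from: kubectl version | base64 | tr -d '\n'
K8S_VERSION="PLACEHOLDER"
URL="https://cloud.weave.works/k8s/net?k8s-version=${K8S_VERSION}&env.IPALLOC_RANGE=${IPALLOC_RANGE}"
echo "$URL"
# Then apply it with: kubectl apply -f "$URL"
```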
Initially, the cluster came up properly. But when we run the same pod multiple times for testing, the kubelet keeps getting restarted every few seconds.
When I checked the events with:

kubectl get events
7m43s Normal Starting node/rajasvm Starting kubelet.
7m43s Normal NodeHasSufficientMemory node/rajasvm Node rajasvm status is now: NodeHasSufficientMemory
7m43s Normal NodeHasNoDiskPressure node/rajasvm Node rajasvm status is now: NodeHasNoDiskPressure
7m43s Normal NodeHasSufficientPID node/rajasvm Node rajasvm status is now: NodeHasSufficientPID
7m26s Normal Starting node/rajasvm Starting kubelet.
7m26s Normal NodeHasSufficientMemory node/rajasvm Node rajasvm status is now: NodeHasSufficientMemory
7m26s Normal NodeHasNoDiskPressure node/rajasvm Node rajasvm status is now: NodeHasNoDiskPressure
7m26s Normal NodeHasSufficientPID node/rajasvm Node rajasvm status is now: NodeHasSufficientPID
7m9s Normal Starting node/rajasvm Starting kubelet.
7m9s Warning ImageGCFailed node/rajasvm failed to get imageFs info: unable to find data in memory cache
6m52s Normal Starting node/rajasvm Starting kubelet.
6m35s Normal Starting node/rajasvm Starting kubelet.
6m35s Normal NodeHasSufficientMemory node/rajasvm Node rajasvm status is now: NodeHasSufficientMemory
6m35s Normal NodeHasNoDiskPressure node/rajasvm Node rajasvm status is now: NodeHasNoDiskPressure
6m35s Normal NodeHasSufficientPID node/rajasvm Node rajasvm status is now: NodeHasSufficientPID
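One quick way to quantify the restart loop is to count the "Starting kubelet." events. A minimal sketch, using a here-doc with a sample of the event output above in place of a live `kubectl get events` stream:

```shell
# Count kubelet restart events. The here-doc stands in for piping
# real output, e.g.: kubectl get events | grep -c "Starting kubelet."
count=$(grep -c "Starting kubelet." <<'EOF'
7m43s Normal Starting node/rajasvm Starting kubelet.
7m26s Normal Starting node/rajasvm Starting kubelet.
7m9s Normal Starting node/rajasvm Starting kubelet.
EOF
)
echo "kubelet restarts observed: $count"
# prints: kubelet restarts observed: 3
```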
journalctl -u kubelet | grep -i garbage
May 27 17:00:05 rajasvm kubelet[20241]: E0527 17:00:05.374190 20241 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
May 27 17:00:22 rajasvm kubelet[20401]: E0527 17:00:22.152485 20401 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
May 27 17:00:39 rajasvm kubelet[20548]: E0527 17:00:39.141443 20548 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
May 27 17:00:55 rajasvm kubelet[20693]: E0527 17:00:55.953994 20693 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
May 27 17:01:12 rajasvm kubelet[20848]: E0527 17:01:12.668267 20848 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
May 27 17:01:29 rajasvm kubelet[20994]: E0527 17:01:29.676793 20994 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
May 27 17:01:46 rajasvm kubelet[21136]: E0527 17:01:46.367956 21136 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
May 27 17:02:03 rajasvm kubelet[21282]: E0527 17:02:03.181850 21282 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
May 27 17:02:03 rajasvm kubelet[21282]: E0527 17:02:03.181865 21282 kubelet.go:1301] Image garbage collection failed multiple times in a row: failed to get imageFs info: unable to find data in memory cache
Please let me know how to solve this issue.
I found the solution: it looks like deleted Docker images somehow weren't cleaned up properly. I don't know why, but the steps below worked for me:
docker system prune
systemctl stop kubelet
systemctl stop docker
systemctl start docker
systemctl start kubelet
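The recovery steps above can be wrapped in one small script. This is only a sketch, not an official procedure: it defaults to a dry run that prints the commands instead of executing them (set DRY_RUN=0 on the actual node to run them for real), and it adds -f to docker system prune so the script isn't blocked by the confirmation prompt:

```shell
#!/bin/sh
# Recovery sketch: prune unused Docker data, then bounce docker and kubelet.
# Defaults to DRY_RUN=1 (print only); set DRY_RUN=0 on the node to execute.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"              # show what would be executed
  else
    "$@"
  fi
}

run docker system prune -f   # -f skips the interactive confirmation
run systemctl stop kubelet
run systemctl stop docker
run systemctl start docker
run systemctl start kubelet
```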