kubelet saying node "master01" not found

1/23/2019

I am trying to set up a stacked kubeadm cluster with three masters. My init command fails with this error:

[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
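
For reference, my init is roughly along these lines; the config file name, Kubernetes version, and load balancer endpoint below are placeholders, not my exact values:

    # kubeadm-config.yaml (placeholder values)
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: v1.13.2
    controlPlaneEndpoint: "LOAD_BALANCER_DNS:6443"

    # run on the first master
    kubeadm init --config=kubeadm-config.yaml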

But I do not use cgroupfs; my cgroup driver is systemd. And my kubelet complains that it does not know its node name:

Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.251885    5620 kubelet.go:2266] node "master01" not found
Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.352932    5620 kubelet.go:2266] node "master01" not found
Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.453895    5620 kubelet.go:2266] node "master01" not found
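
For what it is worth, this is how I am checking that the cgroup driver really is systemd on both sides (the paths below are the defaults on my machines and may differ on yours):

    docker info | grep -i cgroup
    # expected: Cgroup Driver: systemd
    # (set via "exec-opts": ["native.cgroupdriver=systemd"] in /etc/docker/daemon.json)

    cat /var/lib/kubelet/kubeadm-flags.env
    # the kubelet's --cgroup-driver flag here must match what docker reports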

Please let me know where the issue is.

-- manzion_111
kubeadm
kubelet
kubernetes

1 Answer

1/23/2019

The issue can be caused by the Docker version: only Docker versions up to 18.06 are validated against the latest Kubernetes release, i.e. v1.13.x.

I actually hit the same issue myself, and it was resolved after downgrading Docker from 18.09 to 18.06.
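
On Ubuntu the downgrade can be done roughly like this; the exact package version string is only an example and depends on your distro and repository, so check what apt-cache reports first:

    apt-cache madison docker-ce
    # pick an 18.06.x entry from the list, then install it explicitly, e.g.:
    sudo apt-get install docker-ce=18.06.1~ce~3-0~ubuntu
    sudo systemctl restart docker

After downgrading you may need to run kubeadm reset before retrying kubeadm init.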

-- Vikram Jakhar
Source: StackOverflow