How to fix "The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)" error

1/29/2019

I'm setting up a new Kubernetes cluster on my local PC with the specifications below. While trying to initialize the cluster I'm facing some issues. I need your inputs.

OS version: Linux server.cent.com 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Docker version: Docker version 1.13.1, build 07f3374/1.13.1

[root@server ~]# rpm -qa |grep -i kube
kubectl-1.13.2-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubeadm-1.13.2-0.x86_64
kubelet-1.13.2-0.x86_64

The issue I'm facing is:

[root@server ~]# kubeadm init --apiserver-advertise-address=192.168.203.154 --pod-network-cidr=10.244.0.0/16
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

Kubelet status:

Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.354902   10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.456166   10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.558500   10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.660833   10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.763840   10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.867118   10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.968783   10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.071722   10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.173396   10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.274892   10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.292021   10994 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.328447   10994 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://192.168.20?
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.329742   10994 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.376238   10994 kubelet.go:2266] node "server.cent.com" not found

I have tried the same with all of these versions, but I get the same issue: 1.13.2, 1.12.0, 1.11.0, 1.10.0, and 1.9.0.

-- Vignesh M
kubernetes

1 Answer

1/30/2019

As per your outputs, it seems the kubelet service is not able to establish a connection to the Kubernetes API server, and therefore it hasn't passed the health check during installation. The reasons can vary, but I suggest wiping your current kubeadm setup and proceeding with the installation from scratch. You can find a good tutorial for that in a similar case, or you can follow the official Kubernetes kubeadm installation guidelines.
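For example, a minimal sketch of the reset-and-retry sequence might look like the following (the advertise address and pod network CIDR are copied from your original command; adjust them to your environment):

# wipe the previous kubeadm state (removes /etc/kubernetes manifests and cluster data)
kubeadm reset -f

# make sure the kubelet service is enabled and restarted before retrying
systemctl enable kubelet
systemctl restart kubelet

# retry the initialization with the original parameters
kubeadm init --apiserver-advertise-address=192.168.203.154 --pod-network-cidr=10.244.0.0/16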

For investigation purposes you can use the Kubeadm Troubleshooting guide.
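While investigating, one common cause of the "required cgroups disabled" message on CentOS 7 with Docker is a cgroup driver mismatch between Docker and the kubelet, or swap being left enabled. A few quick checks, as a sketch:

# check which cgroup driver Docker is using (cgroupfs or systemd);
# the kubelet must be configured with the same driver
docker info | grep -i "cgroup driver"

# look at the kubelet logs for cgroup-related errors
journalctl -xeu kubelet | grep -i cgroup

# swap must be disabled for the kubelet to start with default settings
swapoff -a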

In case you have any doubts about the installation steps or any other related questions, just write a comment below this answer.

-- mk_sta
Source: StackOverflow