Joining cluster takes forever

3/29/2019

I have set up my master node and I am trying to join a worker node as follows:

kubeadm join 192.168.30.1:6443 --token 3czfua.os565d6l3ggpagw7 --discovery-token-ca-cert-hash sha256:3a94ce61080c71d319dbfe3ce69b555027bfe20f4dbe21a9779fd902421b1a63

However, the command hangs forever in the following state:

[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

Since this is just a warning, why does it actually fail?

edit: I noticed the following in my /var/log/syslog

Mar 29 15:03:15 ubuntu-xenial kubelet[9626]: F0329 15:03:15.353432    9626 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Mar 29 15:03:15 ubuntu-xenial systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 29 15:03:15 ubuntu-xenial systemd[1]: kubelet.service: Unit entered failed state.
-- pkaramol
kubeadm
kubernetes

4 Answers

4/2/2019

I had a bunch of k8s deployment scripts that broke recently with this same error message... it looks like Docker changed its install. Try this --

previous install: apt-get install docker-ce

updated install: apt-get install docker-ce docker-ce-cli containerd.io
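
For completeness, a minimal sketch of the updated install on Ubuntu, assuming the Docker apt repository is already configured on the node:

apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
docker version                          # confirm the client and daemon both respond
systemctl status containerd --no-pager  # containerd now ships as its own package/service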

-- Master Splinter
Source: StackOverflow

3/29/2019

I got the same warning on CentOS 7, but in my case the join command completed without problems, so it was indeed just a warning.

    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace

As the official documentation mentions, there are two common issues that make the init hang (I guess this also applies to the join command):

the default cgroup driver configuration for the kubelet differs from that used by Docker. Check the system log file (e.g. /var/log/messages) or examine the output from journalctl -u kubelet for a cgroup-driver mismatch error; if that is what you see, switch Docker to the systemd driver as sketched below.

First try the steps from the official documentation, and if that does not work, please provide more information so we can troubleshoot further.
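
A minimal sketch of the systemd cgroup-driver change from the container-runtime guide linked in the warning (this assumes Docker is the runtime and that /etc/docker/daemon.json does not exist yet):

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
docker info | grep -i cgroup   # should now report: Cgroup Driver: systemd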

-- aurelius
Source: StackOverflow

9/8/2019

First, if you want to see more detail while your worker joins the master, use:

kubeadm join 192.168.1.100:6443 --token m3jfbb.wq5m3pt0qo5g3bt9     --discovery-token-ca-cert-hash sha256:d075e5cc111ffd1b97510df9c517c122f1c7edf86b62909446042cc348ef1e0b --v=2

Using the above command I could see that my worker could not establish a connection with the master, so I just stopped the firewall:

systemctl stop firewalld 
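
Stopping firewalld entirely works, but a narrower alternative (a sketch, assuming firewalld is in use; the port numbers are the standard kubeadm requirements) is to open only the ports the cluster needs:

firewall-cmd --permanent --add-port=6443/tcp    # Kubernetes API server (control-plane node)
firewall-cmd --permanent --add-port=10250/tcp   # kubelet API (all nodes)
firewall-cmd --reload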
-- Christian Altamirano Ayala
Source: StackOverflow

4/1/2019

The problem had to do with kubeadm not installing a CNI-compatible networking solution out of the box; without one, the Kubernetes master and worker nodes are unable to establish any form of communication.

The following Ansible task addressed the issue:

- name: kubernetes.yml --> Install Flannel
  shell: kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
  become: yes
  environment:
    KUBECONFIG: "/etc/kubernetes/admin.conf"
  when: inventory_hostname in (groups['masters'] | last)
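
Once Flannel is applied, the join completes and the nodes report Ready; a quick check (plain kubectl, nothing specific to this playbook, using the same admin.conf as above):

kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get pods   # flannel and coredns pods should reach Running
kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes                 # worker nodes should move to Ready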
-- pkaramol
Source: StackOverflow