kubernetes - Unable to join master node - error execution phase preflight: couldn't validate the identity of the API Server

4/19/2020

I am a novice to k8s, so this might be a very simple issue for someone with expertise in k8s.

I am working with two nodes:

  1. master - 2 CPU, 2 GB memory
  2. worker - 1 CPU, 1 GB memory
  3. OS - Ubuntu - hashicorp/bionic64

I set up the master node successfully and I can see it is up and running:

vagrant@master:~$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   29m   v1.18.2

Here is the join command with the token I generated:

vagrant@master:~$ kubeadm token create --print-join-command
W0419 13:45:52.513532   16403 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 10.0.2.15:6443 --token xuz63z.todnwgijqb3z1vhz     --discovery-token-ca-cert-hash sha256:d4dadda6fa90c94eca1c8dcd3a441af24bb0727ffc45c0c27161ee8f7e883521 

Issue - But when I try to join from the worker node, I get:

vagrant@worker:~$ sudo kubeadm join 10.0.2.15:6443 --token xuz63z.todnwgijqb3z1vhz     --discovery-token-ca-cert-hash sha256:d4dadda6fa90c94eca1c8dcd3a441af24bb0727ffc45c0c27161ee8f7e883521 
W0419 13:46:17.651819   15987 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: couldn't validate the identity of the API Server: Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 10.0.2.15:6443: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
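A quick way to confirm the failure from the worker side is to probe the advertised endpoint directly. This is a sketch using bash's built-in /dev/tcp (so it needs no extra tools); the address and port come from the join command above:

```shell
# Probe the API server endpoint the join command is dialing.
# "connection refused" from kubeadm corresponds to this probe failing.
if timeout 3 bash -c 'exec 3<>/dev/tcp/10.0.2.15/6443' 2>/dev/null; then
  echo "API server reachable on 10.0.2.15:6443"
else
  echo "API server NOT reachable on 10.0.2.15:6443"
fi
```

Note that in a typical Vagrant/VirtualBox setup, 10.0.2.15 is the per-VM NAT address: both master and worker see themselves as 10.0.2.15, so the worker dialing 10.0.2.15:6443 is effectively dialing itself, which would explain the connection refused.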

Here are the ports which are occupied on the master -

10.0.2.15:2379 
10.0.2.15:2380 
10.0.2.15:68

Note: I am using the Calico CNI from -

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
-- Rahul Wagh
kubernetes

2 Answers

5/30/2020

I ran into a similar issue; the problem was that my node VM's timezone was different. I corrected the time on the node and it worked!
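The fix above can be checked and applied like this (a sketch for Ubuntu; kubeadm's token-based discovery and TLS bootstrap are time-sensitive, so clock skew between nodes can break the join):

```shell
# Print the current UTC time; run this on master and worker and compare.
date -u

# Keep the clock synced via NTP (assumes systemd-timesyncd, as on Ubuntu bionic).
sudo timedatectl set-ntp true 2>/dev/null || echo "timedatectl unavailable (run on the VM as root)"
```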

Hope it may help someone.

-- Govind Mantri
Source: StackOverflow

4/29/2020

Here are the mistakes which I realized I made during my Kubernetes installation -

(For detailed installation steps follow - Steps for Installation )

But here are the key mistakes which I made -

Mistake 1 - Since I was working on VMs, I had multiple ethernet adapters on both VMs (master as well as worker). By default the CNI always takes eth0, but in our case it should be eth1

1: lo: <LOOPBACK,UP,LOWER_UP>
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:bb:14:75 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:fb:48:77 brd ff:ff:ff:ff:ff:ff
    inet 100.0.0.1
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP>
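To see at a glance which adapter carries which address (and hence which one the CNI should bind to), something like this helps; the interface names are from the listing above:

```shell
# One line per interface: name plus IPv4 address (iproute2's ip tool).
# On these Vagrant VMs, eth0 (10.0.2.x) is the NAT adapter and
# eth1 (100.0.0.x) is the host-only network that master and worker share.
ip -o -4 addr show | awk '{print $2, $4}'
```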

Mistake 2 - I was initializing kubeadm without --apiserver-advertise-address and --pod-network-cidr.

So here is the kubeadm command which I used instead -

[vagrant@master ~]$ sudo kubeadm init --apiserver-advertise-address=100.0.0.1 --pod-network-cidr=10.244.0.0/16
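The same two settings can also be written as a kubeadm config file instead of flags (a sketch against the v1beta2 kubeadm API that ships with v1.18; run it with `sudo kubeadm init --config kubeadm-config.yaml`):

```yaml
# kubeadm-config.yaml - equivalent of the two init flags above
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 100.0.0.1      # eth1 host-only address, not the NAT eth0
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16         # matches flannel's default pod network
```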

Mistake 3 - Since we have multiple ethernet adapters in our VMs, I couldn't find a way to set extra args in the calico.yaml configuration to switch from eth0 to eth1.
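For reference, the Calico manifest does have a knob for this: the calico-node container accepts an IP_AUTODETECTION_METHOD environment variable (a sketch; check it against the version of calico.yaml you downloaded before relying on it):

```yaml
# In calico.yaml, under the calico-node DaemonSet's container env:
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth1"    # pin IP autodetection to eth1 instead of the first interface
```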

So I used the flannel CNI instead -

[vagrant@master ~]$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

and in the args section added --iface=eth1

        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1
And it worked after that.
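For reference, that manifest edit can also be scripted. Here is a sketch of the sed substitution, demonstrated on a minimal excerpt of kube-flannel.yml (the real file is the one downloaded with wget above; this assumes the args list appears exactly as in the stock manifest):

```shell
# Minimal excerpt standing in for the real kube-flannel.yml.
cat > kube-flannel-excerpt.yml <<'EOF'
        args:
        - --ip-masq
        - --kube-subnet-mgr
EOF

# Insert --iface=eth1 after --kube-subnet-mgr so flanneld binds to the
# host-only adapter (eth1) instead of defaulting to eth0.
sed -i 's/- --kube-subnet-mgr/- --kube-subnet-mgr\n        - --iface=eth1/' kube-flannel-excerpt.yml
cat kube-flannel-excerpt.yml
```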

-- Rahul Wagh
Source: StackOverflow