Kubernetes: nodes joining a cluster - unable to connect to master

11/28/2019

I've got 3 VMs which can connect.

I've started up 1 master and 2 nodes.

However, I'm not sure what IP address to use here:

sudo kubeadm join <ip address>:6443     --token <token>     --discovery-token-ca-cert-hash <ca-cert-hash>

The actual IP I used to deploy the master (i.e. with kubeadm) was 192.168.56.101. And I can telnet from the node to the master using:

telnet 192.168.56.101 6443

E.g.

telnet 192.168.56.101 6443
Trying 192.168.56.101...
Connected to 192.168.56.101.
Escape character is '^]'.

However, trying kubeadm join on the node with that IP does not work. It just hangs.
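One way to see where such a join stalls is to re-run it with verbose logging (a sketch; the token and hash placeholders are as in the command above, and --v is a standard kubeadm verbosity flag):

```shell
# Same join command, with verbose output to show the step that hangs
sudo kubeadm join 192.168.56.101:6443 --token <token> \
    --discovery-token-ca-cert-hash <ca-cert-hash> --v=5
```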

Any suggestions?

-- Snowcrash
kubernetes

2 Answers

11/28/2019

Run 'hostname -i' and grab the IP address. Use it in the init command. The master IP address should be reachable from all the nodes.
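A minimal sketch of that advice, assuming kubeadm's --apiserver-advertise-address flag (which pins the address the API server announces). On hosts with several interfaces, as VirtualBox-style 192.168.56.x setups usually have, hostname -i may return an address the nodes cannot reach, so the IP from the question is passed explicitly:

```shell
# On the master: see which address the host reports
hostname -i

# Pin the API server to the IP the nodes can actually reach
# (192.168.56.101 is the address from the question)
sudo kubeadm init --apiserver-advertise-address=192.168.56.101
```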

-- P Ekambaram
Source: StackOverflow

11/28/2019

Run:

kubectl cluster-info

Kubernetes master is running at https://xxx.xxx.xx.xx:6443

KubeDNS is running at https://xxx.xxx.xx.xx:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

The Kubernetes master IP is what you are looking for.
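If you want to pull that address out of the cluster-info output programmatically, a small pipe works (a sketch; the sample line below stands in for live kubectl output so it runs without a cluster, using the master IP from the question):

```shell
# Extract the API server IP from "kubectl cluster-info"-style output.
# Against a real cluster, pipe the live output instead of the sample:
#   kubectl cluster-info | sed -n 's#.*https://\([0-9.]*\):6443.*#\1#p'
sample='Kubernetes master is running at https://192.168.56.101:6443'
master_ip=$(printf '%s\n' "$sample" | sed -n 's#.*https://\([0-9.]*\):6443.*#\1#p')
echo "$master_ip"   # prints 192.168.56.101 - the address to use in kubeadm join
```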

Did you deploy your CNI network? Flannel or Calico, for example?

Run this command to see if all your master pods are running.

kubectl get pods --all-namespaces
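To spot unhealthy pods quickly in that listing, the STATUS column can be filtered (a sketch; filter_unhealthy is a hypothetical helper name, and the canned lines stand in for real kubectl output so the filter can be exercised without a cluster):

```shell
# Prints pods whose STATUS column (4th field of --no-headers output)
# is neither Running nor Completed.
filter_unhealthy() { awk '$4 != "Running" && $4 != "Completed"'; }

# Against a live cluster you would pipe real output instead:
#   kubectl get pods --all-namespaces --no-headers | filter_unhealthy
printf '%s\n' \
  'kube-system  coredns-5c98db65d4-xxxxx  0/1  Pending  0  5m' \
  'kube-system  etcd-master               1/1  Running  0  5m' |
  filter_unhealthy
```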

On your node, did you install docker and the kubelet?
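A quick way to verify those node-side prerequisites is to check that the binaries are on PATH (a sketch; check_installed is a hypothetical helper, and presence on PATH does not prove the services are actually running):

```shell
# Report whether each named binary is installed on this host.
check_installed() {
  for bin in "$@"; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "$bin: installed"
    else
      echo "$bin: MISSING"
    fi
  done
}

check_installed docker kubelet kubeadm
```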

-- EAT
Source: StackOverflow