I am trying to set up a k8s cluster with one master and two worker nodes on DigitalOcean.
My config: I have created three droplets, one for the master and two for the workers.
I was able to set up the master node successfully:
$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   139m   v1.18.3
However, I am unable to join a worker node to the master.
Command I ran on the worker to join:
$ kubeadm join <PUBLIC IP>:6443 --token <token> --discovery-token-ca-cert-hash <hash>
The token still had about 23h of validity left when I ran the command above.
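For reference, I believe the token can be checked and, if needed, regenerated on the master:

$ kubeadm token list                            # shows existing tokens and their TTL
$ kubeadm token create --print-join-command     # prints a complete, fresh join command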
The error I got:
W0528 14:13:09.920404 25129 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: Get https://PUBLIC_IP:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
To see the stack trace of this error execute with --v=5 or higher
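Since this looks like a plain connection timeout, I assume the first thing to verify is basic reachability of port 6443 from the worker, something like:

$ nc -vz <PUBLIC IP> 6443                   # is the TCP port reachable at all?
$ curl -k https://<PUBLIC IP>:6443/version  # the API server should answer /version even unauthenticated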
My observations on this issue (output from the master node):
$ netstat -pnltu
Proto Recv-Q Send-Q Local Address     Foreign Address  State    PID/Program name
tcp   0      0      127.0.0.1:40389   0.0.0.0:*        LISTEN   25074/kubelet
tcp   0      0      127.0.0.1:10248   0.0.0.0:*        LISTEN   25074/kubelet
tcp   0      0      127.0.0.1:10249   0.0.0.0:*        LISTEN   25478/kube-proxy
tcp   0      0      127.0.0.1:9099    0.0.0.0:*        LISTEN   29823/calico-node
tcp   0      0      127.0.0.1:10257   0.0.0.0:*        LISTEN   24580/kube-controll
tcp   0      0      127.0.0.1:10259   0.0.0.0:*        LISTEN   24742/kube-schedule
tcp6  0      0      :::10250          :::*             LISTEN   25074/kubelet
tcp6  0      0      :::10251          :::*             LISTEN   24742/kube-schedule
tcp6  0      0      :::6443           :::*             LISTEN   24725/kube-apiserve
tcp6  0      0      :::10252          :::*             LISTEN   24580/kube-controll
tcp6  0      0      :::10256          :::*             LISTEN   25478/kube-proxy
Is it because the API server is listening on IPv6 instead of IPv4?
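From what I have read, a tcp6 listener normally accepts IPv4 connections as well unless net.ipv6.bindv6only is set to 1, so I am not sure this is the actual cause. This could be verified on the master with, e.g.:

$ sysctl net.ipv6.bindv6only               # 0 means tcp6 sockets also accept IPv4
$ curl -k https://127.0.0.1:6443/healthz   # is the API server reachable locally over IPv4?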
Here is the output of cluster-info:
$ kubectl cluster-info
Kubernetes master is running at https://<PUBLIC_IP>:6443
KubeDNS is running at https://<PUBLIC_IP>:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
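One more thing I still need to rule out is a firewall dropping traffic to port 6443 on the master (ufw on the droplet, or a DigitalOcean Cloud Firewall in front of it), e.g.:

$ ufw status verbose                 # is ufw active, and is 6443/tcp allowed?
$ iptables -L INPUT -n | grep 6443   # any explicit rule touching the API server port?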
Any help to fix this issue is much appreciated.