tcp 10.0.2.15:6443: getsockopt: connection refused on Debian 9 VMs

2/22/2019

I am new to Kubernetes and trying to set up my testing cluster with one master and two worker nodes on Debian 9 virtual machines. So far I have installed kubectl, kubeadm and kubelet on all three nodes along with all the basic requirements. I have also installed Docker version 17.03.
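For reference, the installation on each node followed roughly these steps (based on the official apt instructions; the exact repository line may differ in your environment, and Docker 17.03 was installed separately):

    # add the Kubernetes apt repository (same steps on all three nodes)
    apt-get update && apt-get install -y apt-transport-https curl
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
    apt-get update
    apt-get install -y kubelet kubeadm kubectl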

I am facing a problem when trying to join my worker node to the master node.

    root@Minion1:~# kubeadm join 10.0.2.15:6443 --token kd2o6a.aklftqmvp55m87uf --discovery-token-ca-cert-hash sha256:29bc80e3c298e68077468f00472ae9944597f68374122a2d92e3713262bcf160
    [preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
    Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
    [discovery] Trying to connect to API Server "10.0.2.15:6443"
    [discovery] Created cluster-info discovery client, requesting info from "https://10.0.2.15:6443"
    [discovery] Failed to request cluster info, will try again: [Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.0.2.15:6443: getsockopt: connection refused]

Here is the result of the kubeadm init step:

 root@Master:/home/kube# kubeadm init
[init] Using Kubernetes version: v1.10.13
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master] and IPs [10.0.2.15]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 24.505250 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: kd2o6a.aklftqmvp55m87uf
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 10.0.2.15:6443 --token kd2o6a.aklftqmvp55m87uf --discovery-token-ca-cert-hash sha256:29bc80e3c298e68077468f00472ae9944597f68374122a2d92e3713262bcf160

Also, on the master node I am not able to run any kubectl commands as the root user, but I can run them as another user... is this normal?

Here are the network connectivity checks between the master and worker nodes.

root@Minion1:~# ping 10.0.2.15
PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.068 ms
64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.091 ms
64 bytes from 10.0.2.15: icmp_seq=3 ttl=64 time=0.094 ms
^C
--- 10.0.2.15 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2033ms
rtt min/avg/max/mdev = 0.068/0.084/0.094/0.013 ms


root@Master:/home/kube# netstat -ntplu | grep 6443
tcp6       0      0 :::6443                 :::*                    LISTEN      21075/kube-apiserve

How can I join my worker nodes to the master? Is there anything I can troubleshoot to resolve this problem?

Thanks for the help.

-- Pert8S
kubeadm
kubectl
kubernetes

2 Answers

2/22/2019

Install the Go binaries and run the command below to install crictl:

go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
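
Once it builds, make sure the Go bin directory is on your PATH so the binary can be found (this assumes the default GOPATH of $HOME/go):

    # put the compiled crictl on the PATH and verify it runs
    export PATH=$PATH:$(go env GOPATH)/bin
    crictl --version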
-- P Ekambaram
Source: StackOverflow

2/25/2019

You didn't actually show a network connectivity check to the API server in your post; you only pinged the master node from the worker node. Please check this documentation to see the necessary open ports between master and worker nodes, and configure them on your firewall. You can troubleshoot with a telnet client, like

telnet [host] [port]

or with nc

nc -vz [host] [port]
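
For your setup that would be, for example (using the API server address and port from your join command):

    nc -vz 10.0.2.15 6443

If the port is reachable, nc reports success; a "connection refused" there means the TCP connection is being actively rejected, which points at a firewall rule or at the API server not listening on that address.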
-- coolinuxoid
Source: StackOverflow