Kubeadm - no port 6443 after cluster creation

8/2/2018

I'm trying to create a Kubernetes HA cluster using kubeadm. Kubeadm version: v1.11.1.

I'm using the following instructions: kubeadm ha

Everything passed OK except the final step: nodes can't see each other on port 6443.

sudo netstat -an | grep 6443

Shows nothing.
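A quick cross-check from the node itself (a sketch; /healthz is the API server's health endpoint, and -k skips certificate verification):

curl -k https://localhost:6443/healthz   # with no listener this fails with "connection refused"

Getting "connection refused" here confirms that nothing is bound to port 6443 on the node itself, not merely that the load balancer is unreachable.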

In journalctl -u kubelet I see the following error:

reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://<LB>:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-19-111-200.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.19.111.200:6443: connect: connection refused

List of Docker containers running on the instance:

sudo docker ps

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
e3eabb527a92        0e4a34a3b0e6           "kube-scheduler --ad…"   19 hours ago        Up 19 hours                             k8s_kube-scheduler_kube-scheduler-ip-172-19-111-200.ec2.internal_kube-system_31eabaff7d89a40d8f7e05dfc971cdbd_1
123e78fa73c7        55b70b420785           "kube-controller-man…"   19 hours ago        Up 19 hours                             k8s_kube-controller-manager_kube-controller-manager-ip-172-19-111-200.ec2.internal_kube-system_85384ca66dd4dc0adddc63923e2425a8_1
e0aa05e74fb9        1d3d7afd77d1           "/usr/local/bin/kube…"   19 hours ago        Up 19 hours                             k8s_kube-proxy_kube-proxy-xh5dg_kube-system_f6bc49bc-959e-11e8-be29-0eaa4481e274_0
f5eac0b8fe7b        k8s.gcr.io/pause:3.1   "/pause"                 19 hours ago        Up 19 hours                             k8s_POD_kube-proxy-xh5dg_kube-system_f6bc49bc-959e-11e8-be29-0eaa4481e274_0
541011b3e83a        k8s.gcr.io/pause:3.1   "/pause"                 19 hours ago        Up 19 hours                             k8s_POD_etcd-ip-172-19-111-200.ec2.internal_kube-system_84d934eebaace20c70e0f268eb100028_0
a5e203947686        k8s.gcr.io/pause:3.1   "/pause"                 19 hours ago        Up 19 hours                             k8s_POD_kube-scheduler-ip-172-19-111-200.ec2.internal_kube-system_31eabaff7d89a40d8f7e05dfc971cdbd_0
89dbcdda659c        k8s.gcr.io/pause:3.1   "/pause"                 19 hours ago        Up 19 hours                             k8s_POD_kube-apiserver-ip-172-19-111-200.ec2.internal_kube-system_4202bb793950ae679b2a433ea8711d18_0
5948e629d90e        k8s.gcr.io/pause:3.1   "/pause"                 19 hours ago        Up 19 hours                             k8s_POD_kube-controller-manager-ip-172-19-111-200.ec2.internal_kube-system_85384ca66dd4dc0adddc63923e2425a8_0

IP forwarding is enabled in sysctl:

sudo sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.ip_forward = 1
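For reference, a minimal sketch of how these settings are usually persisted on a kubeadm node (the file name k8s.conf is an assumption; sysctl reads any file under /etc/sysctl.d/):

# /etc/sysctl.d/k8s.conf (hypothetical file name)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1

sudo sysctl --system   # reload settings from all sysctl configuration files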
-- user2820186
kubeadm
kubernetes

2 Answers

2/10/2020

This also happens if your Linux kernel is not configured to handle IPv4/IPv6 transparently. Configuring an IPv4 address while the kube-apiserver listens on an IPv6 interface breaks the connection.
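One way to check the address family (a sketch, assuming the API server comes up at all and ss from iproute2 is installed):

sudo ss -tlnp | grep 6443
# "0.0.0.0:6443" or "*:6443" is an IPv4 listener;
# "[::]:6443" or ":::6443" is an IPv6 socket, which serves IPv4 clients
# only if the kernel maps IPv4 onto IPv6 (net.ipv6.bindv6only = 0)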

-- Jon Watte
Source: StackOverflow

8/2/2018

Nodes can't see each other on port 6443.

It seems like your API server is not running.

  • The fact that you get an error stating :6443: connect: connection refused points towards your API server not running.
  • This is further confirmed by your list of running Docker containers on the instance: the API server container is missing. Note that you have the related "/pause" container, but there is no container running "kube-apiserver --...". Your scheduler and controller-manager appear to be running correctly, but the API server is not.

Now you have to dig in and see what prevented your API server from starting properly. Check the kubelet logs on all control-plane nodes, as sketched below.
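A minimal starting point (container names and the manifest path follow kubeadm defaults; adjust if your setup differs):

sudo journalctl -u kubelet --no-pager | grep -i apiserver   # kubelet events about the static pod
sudo docker ps -a | grep kube-apiserver                     # look for an exited api server container
sudo docker logs <container-id>                             # its last output usually names the startup error
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml      # the static pod manifest kubeadm generated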

-- Const
Source: StackOverflow