Cluster install with `kubeadm` on Ubuntu. Failing on `kubeadm init` with `getsockopt: connection refused`

1/21/2018

I'm running Ubuntu 16.04.

I am in the process of starting a Kubernetes cluster using kubeadm.

I am currently initializing the master node with kubeadm init.

I get the following error:

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.

I have swap turned off, and I installed docker with apt-get install docker.io after removing a previous installation of docker-ce with apt-get remove docker-ce && apt-get autoremove --purge docker-ce.
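For reference, a common way to turn swap off and keep it off across reboots looks like the following (my own sketch, not from the kubeadm docs; the `sed` pattern is one typical approach and depends on how your fstab entries are written):

```shell
# Disable swap for the current boot (the kubelet refuses to start with swap on)
sudo swapoff -a

# Comment out any swap entries in /etc/fstab so swap stays off after a reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```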

I am aware some people have had problems if they ever installed docker-ce, and that they've resolved them by starting with fresh environments (see https://github.com/kubernetes/kubernetes/issues/55281). I really want to avoid having to do that.

How can I get beyond this error?

Thanks.


Update. Running journalctl -u kubelet -f yielded:

error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
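You can sanity-check the mismatch directly by asking each side which driver it uses (a sketch I added; `docker info` is standard, but where the kubelet's flag lives varies by version and setup):

```shell
# Show Docker's cgroup driver (prints "cgroupfs" or "systemd")
docker info 2>/dev/null | grep -i 'cgroup driver'

# Show the driver the running kubelet was started with, if it was set via a flag
# (may print nothing if the flag isn't on the command line)
ps aux | grep -o -- '--cgroup-driver=[a-z]*' | head -1
```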

This led me to believe I needed to run:

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF

I restarted Docker with systemctl restart docker and then retried with kubeadm reset && kubeadm init.

That worked and solved my problem.

I would like to point out that the kubeadm instructions currently seem to tell you to do something completely different.

They tell you to do:

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

That just led me down the wrong path.

I didn't see the note after that:

Or ensure the --cgroup-driver kubelet flag is set to the same value as Docker (e.g. cgroupfs).
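For completeness, the flag-based alternative that note describes amounts to editing the kubeadm drop-in for the kubelet and restarting it. This is my own sketch; the drop-in path below is the one kubeadm used around this release and may differ on other versions:

```shell
# Make the kubelet use the same driver as Docker (here switching it to systemd)
sudo sed -i 's/cgroup-driver=cgroupfs/cgroup-driver=systemd/' \
  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Reload unit files and restart the kubelet so the new flag takes effect
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```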

Thanks!


-- David West
kubeadm
kubernetes

2 Answers

1/23/2018

My system needed

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
-- David West
Source: StackOverflow

1/22/2018

Looking at the error, I think you should definitely check the logs as @David suggested. Your setup is unable to reach the kubelet, which listens on 127.0.0.1:10255; the port 10255 in the error points to that.

So I think your kubelet is not properly installed or configured. I would suggest restarting the kubelet with "service kubelet restart", then running "kubeadm reset" to clean up the corrupted setup, and finally "kubeadm init". One more point I want to add, though it's not directly related to your question: you should pass "--pod-network-cidr=" to kubeadm init, as many network plugins require it.
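As an illustration (my addition), with Flannel the value commonly used is 10.244.0.0/16; check your network plugin's documentation for the CIDR it expects:

```shell
# Clean up any failed attempt, then initialize with a pod network CIDR
sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```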

-- Abhay Dwivedi
Source: StackOverflow