kubectl cannot connect to cluster; No containers are running

11/20/2019

I have installed K8s using kubeadm on the master node. However, when I try running kubectl cluster-info I get the following response:

The connection to the server <host>:6443 was refused - did you specify the right host or port?

I made sure that swap is off, KUBECONFIG is set properly, .kube/config is proper, it is listening on port 6443, and the firewall is disabled. The two issues I did find are that there are no cache or http-cache files in the .kube directory, and that no containers are running when I run docker container ls or docker ps. However, I can see that the images for the containers are available with docker image ls.
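
For reference, the commands behind those checks were roughly the following (an illustrative sketch; exact flags may differ per distribution):

```shell
# Rough diagnostic commands behind the checks above (illustrative
# sketch, not a script from the original setup).
swapon --show || true                 # no output means swap is off
echo "KUBECONFIG=${KUBECONFIG:-<unset>}"
ss -ntlp | grep ':6443' || echo "nothing listening on :6443"
docker ps || true                     # empty table: no containers running
docker image ls | head || true        # the images themselves are present
```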

When I run systemctl status kubelet I get the following:

kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Wed 2019-XX-XX XX:XX:XX XXX; 1s ago
     Docs: https://kubernetes.io/docs/home/
  Process: 6541 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 6541 (code=exited, status=255)

Checking the kubelet's logs, I find (trimmed):

Started kubelet: The Kubernetes Node Agent.
kubelet.service: Current command vanished from the unit file, execution of the com
Stopping kubelet: The Kubernetes Node Agent...
Stopped kubelet: The Kubernetes Node Agent.
Started kubelet: The Kubernetes Node Agent.
F1120 04:53:12.437733    9430 server.go:196] failed to load Kubelet config file
kubelet.service: Main process exited, code=exited, status=255/n/a
kubelet.service: Failed with result 'exit-code'.
Stopped kubelet: The Kubernetes Node Agent.
Started kubelet: The Kubernetes Node Agent.
Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
I1120 04:53:20.229997    9604 server.go:410] Version: v1.16.3
I1120 04:53:20.230143    9604 plugins.go:100] No cloud provider specified.
I1120 04:53:20.230154    9604 server.go:773] Client rotation is on, will bootstrap in background
F1120 04:53:20.230185    9604 server.go:271] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
kubelet.service: Main process exited, code=exited, status=255/n/a
kubelet.service: Failed with result 'exit-code'.
kubelet.service: Service hold-off time over, scheduling restart.
kubelet.service: Scheduled restart job, restart counter is at 7.
Stopped kubelet: The Kubernetes Node Agent.
Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://<host>:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests:
--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
container manager verified user specified cgroup-root exists: []
Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs
[fake topologymanager] NewFakeManager
Creating device plugin manager: true
anager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{kubelet.sock /var/lib/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x1b6c020 0x79a0338 0x1b6ca20 map[] map[] map[] map[] map[] 0xc
[cpumanager] initializing new in-memory state store
anager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{{0 0} 0x79a0338 10000000000 0xc0001810e0 <nil> <nil> <nil> <nil> map[memory:{{104857600 0} {<nil>}  BinarySI}]}
Adding pod path: /etc/kubernetes/manifests
Watching apiserver

Lastly, I get the same error when I use the wrong cluster certificate, but an authentication error when I change the user certificate.

I am unsure how to fix this.

-- student
kubernetes

1 Answer

11/20/2019

Your kubelet.service logs give a fairly straightforward answer as to where your issue is located. As you can see from the output of systemctl status kubelet, it was started with the following parameters passed as environment variables:

 Process: 6541 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

but failed:

Main PID: 6541 (code=exited, status=255)

As you can see further in kubelet.service logs:

F1120 04:53:12.437733    9430 server.go:196] failed to load Kubelet config file
kubelet.service: Main process exited, code=exited, status=255/n/a
kubelet.service: Failed with result 'exit-code'.
Stopped kubelet: The Kubernetes Node Agent.
Started kubelet: The Kubernetes Node Agent.

It couldn't load the Kubelet config file. All the subsequent errors seem to be just the aftermath of the kubelet not being able to load its proper config file:

Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
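
With kubeadm, the kubelet config file referenced by --config defaults to /var/lib/kubelet/config.yaml, so a quick first check could be (a diagnostic sketch, assuming that default path):

```shell
# With a kubeadm setup, the kubelet config file defaults to
# /var/lib/kubelet/config.yaml; check that it exists and is readable
# (diagnostic sketch, not part of the original answer).
cfg=/var/lib/kubelet/config.yaml
if [ -r "$cfg" ]; then
  head -n 5 "$cfg"
else
  echo "$cfg is missing or unreadable"
fi
```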

Did you check the file /etc/kubernetes/bootstrap-kubelet.conf that it tries to load?

F1120 04:53:20.230185    9604 server.go:271] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
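
You can verify this directly; on an already-initialized control plane, /etc/kubernetes/kubelet.conf is normally what matters, and the bootstrap kubeconfig is only needed during TLS bootstrapping (a diagnostic sketch):

```shell
# Check whether the bootstrap kubeconfig the kubelet stats actually
# exists, and whether the regular kubelet kubeconfig is in place
# (diagnostic sketch).
f=/etc/kubernetes/bootstrap-kubelet.conf
ls -l "$f" 2>/dev/null || echo "$f: no such file"
ls -l /etc/kubernetes/kubelet.conf 2>/dev/null || echo "kubelet.conf is also missing"
```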

What do you have in your kubelet.service systemd unit file /lib/systemd/system/kubelet.service? Could you attach it to your question?
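
To collect that, you could dump both the unit file and the kubeadm drop-in that sets the $KUBELET_* variables (paths taken from the systemctl status output above):

```shell
# Dump the kubelet unit file and the kubeadm drop-in that defines the
# $KUBELET_* environment variables shown in the ExecStart line.
for f in /lib/systemd/system/kubelet.service \
         /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; do
  echo "== $f =="
  cat "$f" 2>/dev/null || echo "(not found)"
done
```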

If the kubelet agent on the master node doesn't start properly, none of the static pods defined in /etc/kubernetes/manifests can be created, and almost all key components of your cluster (including kube-apiserver) are missing.
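
You can see which static pod manifests kubeadm wrote there; on a healthy control plane you would expect etcd, kube-apiserver, kube-controller-manager, and kube-scheduler (a diagnostic sketch):

```shell
# List the static pod manifests the kubelet is supposed to run.
# If the kubelet is down, none of these components start, which is
# why the API server on :6443 refuses connections.
ls /etc/kubernetes/manifests 2>/dev/null \
  || echo "manifests directory not found"
# Expected on a healthy control plane:
#   etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
```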

I made sure that swap is off, KUBECONFIG is set properly, .kube/config is proper, it is listening on port 6443, and the firewall is disabled.

Did you make sure that it is merely configured to listen on 6443, or that it actually listens on port 6443? What does ss -ntlp | grep 6443 show?

Unfortunately, it's not possible to say what the exact problem is in this particular case without more information about your environment. Did you follow any particular tutorial or the official documentation when creating it?

-- mario
Source: StackOverflow