Kubernetes dashboard in CrashLoopBackOff: Error while initializing connection to Kubernetes apiserver

6/10/2018

I have a fresh install of Kubernetes on a 3-node cluster (Ubuntu 16.04, VirtualBox), using kubeadm:

  kubeadm version: &version.Info{Major:"1", Minor:"10",      GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z",  GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

I installed the kubernetes dashboard using the standard yaml definition:

  kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

However I see the pod keeps crashing:

 kube-proxy-6bzmx                        1/1       Running            2          1d
 kube-proxy-9jp98                        1/1       Running            2          1d
 kube-proxy-bppbp                        1/1       Running            0          1d
 kube-scheduler-kubemaster               1/1       Running            2          1d 
 kubernetes-dashboard-7d5dcdb6d9-9snln   0/1       CrashLoopBackOff   1          1m

I've modified the --apiserver-host line as follows:

--apiserver-host=https://127.0.0.1:6443
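For context, that flag lives in the container args of the dashboard Deployment inside kubernetes-dashboard.yaml. A minimal sketch of the relevant section (the image tag is an assumption based on the dashboard release current at the time; the address is my local API server):

```yaml
# Excerpt of the kubernetes-dashboard Deployment spec (surrounding fields elided)
containers:
- name: kubernetes-dashboard
  image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3   # assumed tag
  args:
    - --auto-generate-certificates
    - --apiserver-host=https://127.0.0.1:6443
```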

I can curl a response from the api server url successfully:

  root@kubemaster:~/dashboard# curl -k https://192.168.99.20:6443/version
  {
    "major": "1",
    "minor": "10",
    "gitVersion": "v1.10.3",
    "gitCommit": "2bba0127d85d5a46ab4b778548be28623b32d0b0",
    "gitTreeState": "clean",
    "buildDate": "2018-05-21T09:05:37Z",
    "goVersion": "go1.9.3",
    "compiler": "gc",
    "platform": "linux/amd64"
  }

And:

  root@kubemaster:~/dashboard# curl -k https://127.0.0.1:6443/version
  {
    "major": "1",
    "minor": "10",
    "gitVersion": "v1.10.3",
    "gitCommit": "2bba0127d85d5a46ab4b778548be28623b32d0b0",
    "gitTreeState": "clean",
    "buildDate": "2018-05-21T09:05:37Z",
    "goVersion": "go1.9.3",
    "compiler": "gc",
    "platform": "linux/amd64"
  }

This works even from within a container launched via docker on the master:

  root@79e42d97e37d:/# curl -k https://192.168.99.20:6443/version
  {
    "major": "1",
    "minor": "10",
    "gitVersion": "v1.10.3",
    "gitCommit": "2bba0127d85d5a46ab4b778548be28623b32d0b0",
    "gitTreeState": "clean",
    "buildDate": "2018-05-21T09:05:37Z",
    "goVersion": "go1.9.3",
    "compiler": "gc",
    "platform": "linux/amd64"
  }

As well as from one of the slave nodes:

  root@kubenode1:~# curl -k https://192.168.99.20:6443/version
  {
    "major": "1",
    "minor": "10",
    "gitVersion": "v1.10.3",
    "gitCommit": "2bba0127d85d5a46ab4b778548be28623b32d0b0",
    "gitTreeState": "clean",
    "buildDate": "2018-05-21T09:05:37Z",
    "goVersion": "go1.9.3",
    "compiler": "gc",
    "platform": "linux/amd64"
  }

However, redeploying still results in the pod ending up in CrashLoopBackOff, with the same error.

Stopping the nodes to force dashboard deployment on the master just results in the pod remaining in pending state forever:

Every 2.0s: kubectl get po -n kube-system                                                                                                Tue Jun  5 05:04:56 2018

NAME                                    READY     STATUS    RESTARTS   AGE 
etcd-kubemaster                         1/1       Running   8          1d
kube-apiserver-kubemaster               1/1       Running   9          1d
kube-controller-manager-kubemaster      1/1       Running   8          1d
kube-dns-86f4d74b45-kf8mr               3/3       Running   21         1d
kube-flannel-ds-5cl8l                   1/1       Running   5          1d
kube-flannel-ds-8fgk6                   1/1       Running   1          1d
kube-flannel-ds-hmzdb                   1/1       Running   9          1d
kube-proxy-6bzmx                        1/1       Running   8          1d
kube-proxy-9jp98                        1/1       Running   3          1d
kube-proxy-bppbp                        1/1       Running   1          1d
kube-scheduler-kubemaster               1/1       Running   9          1d
kubernetes-dashboard-7f86dc5d9c-sdtb5   0/1       Pending   0          4m

However, I see from the kube-proxy logs that kube-proxy also cannot reach the API server ("dial tcp 192.168.99.20:6443: getsockopt: connection refused"):

E0609 20:49:25.816749       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://192.168.99.20:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.20:6443: getsockopt: connection refused

Things I've tried so far:

1) Disabled IPv6 on the Linux master
2) Stopped all nodes and ensured the dashboard only deploys on the master
3) Changed the API URL from http to https

Is this a known issue? Alternatively, how can I get a running dashboard? :-)

Thanks in advance for any help!

-- Traiano Welcome
docker
kubernetes

1 Answer

6/24/2018

The Kubernetes dashboard has to run on the master node.

It turned out I had to make absolutely sure it would not be deployed on any of the other 2 nodes in my cluster, in this case by simply shutting down the kube service on each of those nodes.

A better solution would simply be to add a constraint restricting deployment of the dashboard to the master node.
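For example, assuming the master still carries the standard node-role.kubernetes.io/master label and NoSchedule taint that kubeadm applies, something like this added to the dashboard Deployment's pod template should pin it to the master (a sketch, not tested on this cluster):

```yaml
# Add to the pod template spec of the kubernetes-dashboard Deployment.
# Assumes the kubeadm-applied master label and taint are present.
spec:
  nodeSelector:
    node-role.kubernetes.io/master: ""
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
```

The nodeSelector restricts scheduling to the master, and the toleration lets the pod run there despite the master's NoSchedule taint.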

-- Traiano Welcome
Source: StackOverflow