Kubernetes dashboard not working, “already exists” and “could not find the requested resource (get services heapster)”

9/27/2017

I am new to Kubernetes.

The goal is to get the Kubernetes cluster dashboard working.

The Kubernetes cluster was deployed using Kubespray: github.com/kubernetes-incubator/kubespray

Versions:

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-15T08:51:21Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3+coreos.0", GitCommit:"42de91f04e456f7625941a6c4aaedaa69708be1b", GitTreeState:"clean", BuildDate:"2017-08-07T19:44:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

When I run kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml --validate=false as described here

I get:

Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": rolebindings.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": deployments.extensions "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists

When I run kubectl get services --namespace kube-system, I get:

NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               10.233.0.3      <none>        53/UDP,53/TCP   10d
kubernetes-dashboard   10.233.28.132   <none>        80/TCP          9d

When I try to reach the Kubernetes dashboard, I get Connection refused.

kubectl logs --namespace=kube-system kubernetes-dashboard-4167803980-1dz53 output:

2017/09/27 10:54:11 Using in-cluster config to connect to apiserver
2017/09/27 10:54:11 Using service account token for csrf signing
2017/09/27 10:54:11 No request provided. Skipping authorization
2017/09/27 10:54:11 Starting overwatch
2017/09/27 10:54:11 Successful initial request to the apiserver, version: v1.7.3+coreos.0
2017/09/27 10:54:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2017/09/27 10:54:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2017/09/27 10:54:11 Initializing secret synchronizer synchronously using secret kubernetes-dashboard-key-holder from namespace kube-system
2017/09/27 10:54:11 Initializing JWE encryption key from synchronized object
2017/09/27 10:54:11 Creating in-cluster Heapster client
2017/09/27 10:54:11 Serving securely on HTTPS port: 8443
2017/09/27 10:54:11 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
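
As far as I understand, this last message only means that no heapster service exists in the cluster (the dashboard keeps retrying but stays up). That can be confirmed with:

kubectl --namespace=kube-system get svc heapster

which presumably returns NotFound here, consistent with the kubectl top nodes error below.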

Other outputs:

kubectl get pods --namespace=kube-system:

NAME                                    READY     STATUS    RESTARTS   AGE
calico-node-bqckz                       1/1       Running   0          12d
calico-node-r9svd                       1/1       Running   2          12d
calico-node-w3tps                       1/1       Running   0          12d
kube-apiserver-kubetest1                1/1       Running   0          12d
kube-apiserver-kubetest2                1/1       Running   0          12d
kube-controller-manager-kubetest1       1/1       Running   2          12d
kube-controller-manager-kubetest2       1/1       Running   2          12d
kube-dns-3888408129-n0m8d               3/3       Running   0          12d
kube-dns-3888408129-z8xx3               3/3       Running   0          12d
kube-proxy-kubetest1                    1/1       Running   0          12d
kube-proxy-kubetest2                    1/1       Running   0          12d
kube-proxy-kubetest3                    1/1       Running   0          12d
kube-scheduler-kubetest1                1/1       Running   2          12d
kube-scheduler-kubetest2                1/1       Running   2          12d
kubedns-autoscaler-1629318612-sd924     1/1       Running   0          12d
kubernetes-dashboard-4167803980-1dz53   1/1       Running   0          1d
nginx-proxy-kubetest3                   1/1       Running   0          12d

kubectl proxy:

Starting to serve on 127.0.0.1:8001
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x2692f20]

goroutine 1 [running]:
k8s.io/kubernetes/pkg/kubectl.(*ProxyServer).ServeOnListener(0x0, 0x3a95a60, 0xc420114110, 0x17, 0xc4208b7c28)
    /private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/proxy_server.go:201 +0x70
k8s.io/kubernetes/pkg/kubectl/cmd.RunProxy(0x3aa5ec0, 0xc42074e960, 0x3a7f1e0, 0xc42000c018, 0xc4201d7200, 0x0, 0x0)
    /private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:156 +0x774
k8s.io/kubernetes/pkg/kubectl/cmd.NewCmdProxy.func1(0xc4201d7200, 0xc4203586e0, 0x0, 0x2)
    /private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:79 +0x4f
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc4201d7200, 0xc420358500, 0x2, 0x2, 0xc4201d7200, 0xc420358500)
    /private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x234
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc4202e4240, 0x5000107, 0x0, 0xffffffffffffffff)
    /private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x2fe
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc4202e4240, 0xc42074e960, 0x3a7f1a0)
    /private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b
k8s.io/kubernetes/cmd/kubectl/app.Run(0x0, 0x0)
    /private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/app/kubectl.go:39 +0xd5
main.main()
    /private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:26 +0x22

kubectl top nodes:

Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)

kubectl get svc --namespace=kube-system:

NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               10.233.0.3      <none>        53/UDP,53/TCP   12d
kubernetes-dashboard   10.233.28.132   <none>        80/TCP          11d

curl http://localhost:8001/ui: curl: (7) Failed to connect to 10.2.3.211 port 8001: Connection refused

How can I get the dashboard working? I'd appreciate your help.

-- Ivan
dashboard
kubernetes

2 Answers

9/27/2017

You may be installing dashboard version 1.7. Try installing version 1.6.3 instead; it's well tested:

kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin  --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml

Update 10/2/17: Can you try this? Delete the current dashboard, then install the 1.6.3 version:

kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml 


kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin  --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
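
Once that is done, you can check that the new deployment came up with something like this (a rough sketch; the 1.6.3 manifest also deploys into kube-system):

kubectl --namespace=kube-system rollout status deployment/kubernetes-dashboard
kubectl --namespace=kube-system get svc kubernetes-dashboard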
-- sfgroups
Source: StackOverflow

9/27/2017

I believe the Kubernetes dashboard is already available by default if you deploy through GCP or Azure. The first error already indicates this. To verify, you can run the following commands to look for the dashboard pods/service in the kube-system namespace.

>kubectl get pods --namespace=kube-system 
>kubectl get svc --namespace=kube-system 

From the output of the above commands, you should find the existing Kubernetes dashboard, so you don't need to deploy it again. To access the dashboard, you can run the following command.

>kubectl proxy 

This will make the Dashboard available at http://localhost:8001/ui on the machine where you type this command.
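
Since kubectl proxy panics in your output, a plain port-forward to the dashboard pod might work as a fallback. A sketch, using the pod name and the 8443 HTTPS port from your logs (adjust the pod name to yours):

>kubectl --namespace=kube-system port-forward kubernetes-dashboard-4167803980-1dz53 8443:8443

The dashboard should then be reachable at https://localhost:8443.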

But to understand your problem better, may I know which version of Kubernetes and which environment you are using now? Also, it would be great if you could show me the output of these two commands.

>kubectl get pods --namespace=kube-system 
>kubectl top nodes 
-- Isaac Wong
Source: StackOverflow