Minikube running in Docker mode returns 503 when launching the dashboard

4/22/2019

I have started to learn Minikube using some of this tutorial and a bit of this one. My plan is to use the "none" driver, which runs against the host's Docker daemon rather than the standard VirtualBox VM.

My purpose is to learn some infra/operations techniques that are more flexible than Docker Swarm. There are a few docker run switches that Swarm does not support, so I am looking at alternatives.

When setting this up, I had a couple of false starts, as I did not specify --vm-driver=none initially, and I had to do a sudo rm -rf ~/.minikube and/or a sudo minikube delete to stop it using VirtualBox. (Although I don't think it is relevant, I will mention anyway that I am working inside a VirtualBox Linux Mint VM, as a matter of long-standing security preference.)
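
For reference, the full reset I ended up doing between false starts was roughly this (delete the cluster first, then the leftover local state):

$ sudo minikube delete      # tear down the cluster
$ sudo rm -rf ~/.minikube   # remove leftover local state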

So, I think I have a mostly working installation of Minikube, but something is not right with the dashboard, and since the Hello World tutorial asks me to get that working, I would like to persist with this.

Here is the command and error:

$ sudo minikube dashboard
  Enabling dashboard ...
  Verifying dashboard health ...
  Launching proxy ...
  Verifying proxy health ...
  http://127.0.0.1:41303/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ is not responding properly: Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
{snipped many more of these}

Minikube itself looks OK:

$ sudo minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 10.0.2.15

However, it looks like some components have not been able to start, and there is no indication of why they are having trouble:

$ sudo kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS             RESTARTS   AGE
kube-system   coredns-fb8b8dccf-2br2c                 0/1     CrashLoopBackOff   16         62m
kube-system   coredns-fb8b8dccf-nq4b8                 0/1     CrashLoopBackOff   16         62m
kube-system   etcd-minikube                           1/1     Running            2          60m
kube-system   kube-addon-manager-minikube             1/1     Running            3          61m
kube-system   kube-apiserver-minikube                 1/1     Running            2          61m
kube-system   kube-controller-manager-minikube        1/1     Running            3          61m
kube-system   kube-proxy-dzqsr                        1/1     Running            0          56m
kube-system   kube-scheduler-minikube                 1/1     Running            2          60m
kube-system   kubernetes-dashboard-79dd6bfc48-94c8l   0/1     CrashLoopBackOff   12         40m
kube-system   storage-provisioner                     1/1     Running            3          62m

I am assuming that a zero in the READY column means that something was not able to start.
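
From what I can tell, the way to dig into a crash-looping pod is to describe it and to pull the logs of its previous run; for example, for the dashboard pod listed above:

$ sudo kubectl describe pod -n kube-system kubernetes-dashboard-79dd6bfc48-94c8l
$ sudo kubectl logs -n kube-system kubernetes-dashboard-79dd6bfc48-94c8l --previous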

I have been issuing commands variously with and without sudo, which might be related: some config files in my non-root ~/.minikube folder end up owned by root, and I have been forced to use sudo to progress further.
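
From what I have read, a common workaround with the none driver is to take ownership of those root-owned files back (assuming the default locations; I believe ~/.kube can be affected in the same way):

$ sudo chown -R $USER:$USER ~/.minikube
$ sudo chown -R $USER:$USER ~/.kube   # if this folder exists and is root-owned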

This also seems OK:

$ sudo kubectl cluster-info
Kubernetes master is running at https://10.0.2.15:8443
KubeDNS is running at https://10.0.2.15:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Incidentally, I don't really know what these various status commands do, or whether they are relevant; I found some similar posts here and on GitHub, and their respective authors used these commands when writing up questions and bug reports.

This API status looks like it is in a pickle, but I don't know whether it is relevant (I found it through semi-random digging):

https://10.0.2.15:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

{
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Failure",
    "message": "services \"kube-dns:dns\" is forbidden: User \"system:anonymous\" cannot get resource \"services/proxy\" in API group \"\" in the namespace \"kube-system\"",
    "reason": "Forbidden",
    "details": {
        "name": "kube-dns:dns",
        "kind": "services"
    },
    "code": 403
}
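
I believe this 403 just means that a plain browser request reaches the API server as system:anonymous; going through kubectl proxy should attach my kubectl credentials instead (a sketch, reusing the same service path):

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

# then, from another terminal:
$ curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy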

I also managed to cause a Go crash, seen in sudo minikube logs:

panic: secrets is forbidden: User "system:serviceaccount:kube-system:default" cannot create resource "secrets" in API group "" in the namespace "kube-system"

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.(*rsaKeyHolder).init(0xc42011c2e0)
    /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:131 +0x35e
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.NewRSAKeyHolder(0x1367500, 0xc4200d0120, 0xc4200d0120, 0x1213a6e)
    /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:170 +0x64
main.initAuthManager(0x13663e0, 0xc420301b00, 0xc4204cdcd8, 0x1)
    /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:185 +0x12c
main.main()
    /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:103 +0x26b

I expect that would correspond to the 503 I am getting, which is a server error of some kind.

Some versions:

$ minikube version
minikube version: v1.0.0
$ docker --version
Docker version 18.09.2, build 6247962
$ sudo kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

What can I try next to debug this?

-- halfer
docker
kubernetes
minikube

1 Answer

4/22/2019

It looks like I needed the rubber-ducking of this question in order to find an answer. The Go crash was the thing to research, and it is documented in this bug report.

The command to create the missing role binding is:

$ kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/kube-system-cluster-admin created
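
You can confirm the binding exists afterwards with:

$ kubectl get clusterrolebinding kube-system-cluster-admin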

Then we need to get the name of the system pod for the dashboard:

$ sudo kubectl get pods -n kube-system

Finally, delete the dashboard pod, substituting the name you found for kubernetes-dashboard-5498ccf677-dq2ct:

$ kubectl delete pods -n kube-system kubernetes-dashboard-5498ccf677-dq2ct
pod "kubernetes-dashboard-5498ccf677-dq2ct" deleted
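
If you would rather not copy the pod name by hand, a label selector should also work (a sketch; I am assuming the addon's pods carry the usual k8s-app=kubernetes-dashboard label):

$ kubectl delete pods -n kube-system -l k8s-app=kubernetes-dashboard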

I think this removes the misconfigured dashboard pod, and its deployment spawns a correctly configured one in its place, which you can then reach by issuing this command again:

sudo minikube dashboard
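
You can watch the replacement pod spin up with:

$ sudo kubectl get pods -n kube-system --watch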

To my mind, the Go error looks sufficiently naked and unhandled that it needs catching, but then I am not au fait with Go. The bug report has been auto-closed by a CI bot, and several attempts to reopen it seem to have failed.

At a guess, I could have avoided this pain by setting up the role config to start with. However, this is not noted in the Hello World tutorial, so beginners cannot reasonably be expected to avoid this trap:

sudo minikube start --vm-driver=none --extra-config='apiserver.Authorization.Mode=RBAC'
-- halfer
Source: StackOverflow