I am running Kubernetes on bare metal and use the Kubernetes Dashboard to manage the cluster. This works fine at first, but after 5-30 minutes, when I try to access the dashboard at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
I get the following error:
Error: 'dial tcp 10.35.0.19:8443: connect: no route to host' Trying to reach: 'https://10.35.0.19:8443/'
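In case it helps with diagnosing: this is how the IP in the error can be cross-checked against the dashboard pod, and how the pod can be probed directly from a node, bypassing the API server proxy (the IP/port below are simply copied from the error message above):
# Show the dashboard pod's current IP (compare it to the one in the error)
kubectl get pods -n kube-system -o wide | grep kubernetes-dashboard
# Probe the dashboard pod directly from a node, skipping the proxy
curl -k https://10.35.0.19:8443/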
All pods in kube-system are up and running when I check them with kubectl get pods -n kube-system:
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-87pfc               1/1     Running   0          1m
coredns-86c58d9df4-tflg5               1/1     Running   0          1m
etcd-controller01                      1/1     Running   5          1m
etcd-controller02                      1/1     Running   6          1m
heapster-798ffb9b4-744q4               1/1     Running   0          1m
kube-apiserver-controller01            1/1     Running   1          1m
kube-apiserver-controller02            1/1     Running   3          1m
kube-controller-manager-controller01   1/1     Running   5          1m
kube-controller-manager-controller02   1/1     Running   2          1m
kube-proxy-8qqnq                       1/1     Running   0          1m
kube-proxy-9vgck                       1/1     Running   0          1m
kube-proxy-dht69                       1/1     Running   0          1m
kube-proxy-f7bx8                       1/1     Running   0          1m
kube-proxy-jnxtq                       1/1     Running   0          1m
kube-proxy-l5h7m                       1/1     Running   0          1m
kube-proxy-p9gt5                       1/1     Running   0          1m
kube-proxy-zv4sr                       1/1     Running   0          1m
kube-scheduler-controller01            1/1     Running   3          1m
kube-scheduler-controller02            1/1     Running   4          1m
kubernetes-dashboard-57df4db6b-px8xc   1/1     Running   0          1m
metrics-server-55d46868d4-s9j5v        1/1     Running   0          1m
monitoring-grafana-564f579fd4-fm6lm    1/1     Running   0          1m
monitoring-influxdb-8b7d57f5c-llgz9    1/1     Running   0          1m
weave-net-2b2dm                        2/2     Running   1          1m
weave-net-988rf                        2/2     Running   0          1m
weave-net-hcm5n                        2/2     Running   0          1m
weave-net-kb2gk                        2/2     Running   0          1m
weave-net-ksvbf                        2/2     Running   0          1m
weave-net-q9zlw                        2/2     Running   0          1m
weave-net-t9f6m                        2/2     Running   0          1m
weave-net-vdspp                        2/2     Running   0          1m
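Since "no route to host" looks like an overlay-network problem, and Weave Net is the CNI here, the Weave peer status can also be inspected from inside one of the weave-net pods (the pod name below is just the first one from the listing above; /home/weave/weave is the standard script path inside the Weave Net image):
# Overall Weave Net status on this node (peers, allocated IP range, etc.)
kubectl exec -n kube-system weave-net-2b2dm -c weave -- /home/weave/weave --local status
# Per-peer connection state, to spot nodes that lost connectivity
kubectl exec -n kube-system weave-net-2b2dm -c weave -- /home/weave/weave --local status connections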
When I restart all pods in this namespace with kubectl delete pods --all -n kube-system, the dashboard sometimes works again for another 5-30 minutes; at other times it simply starts working again on its own. I have tried restarting each pod in this namespace individually to track down which pod is causing the issue, but restarting the pods one by one never brings the dashboard back. Only deleting them all at once works.
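For clarity, the one-by-one restarts looked roughly like this, with the pod names taken from the listing above:
kubectl delete pod -n kube-system kubernetes-dashboard-57df4db6b-px8xc
kubectl delete pod -n kube-system weave-net-2b2dm
# ...and so on for each pod in turn; none of these individual restarts revived the dashboard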
Does anybody have an idea why this happens and how I can fix this? Thank you in advance!