I followed the official manual to install the Kubernetes Dashboard.
Step 1:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
serviceaccount "kubernetes-dashboard" created
service "kubernetes-dashboard" created
secret "kubernetes-dashboard-certs" created
secret "kubernetes-dashboard-csrf" created
secret "kubernetes-dashboard-key-holder" created
configmap "kubernetes-dashboard-settings" created
role.rbac.authorization.k8s.io "kubernetes-dashboard" created
clusterrole.rbac.authorization.k8s.io "kubernetes-dashboard" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
deployment.apps "kubernetes-dashboard" created
service "dashboard-metrics-scraper" created
The Deployment "dashboard-metrics-scraper" is invalid: spec.template.annotations.seccomp.security.alpha.kubernetes.io/pod: Invalid value: "runtime/default": must be a valid seccomp profile
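One likely workaround (a sketch, assuming the validation error comes from the alpha seccomp annotation not being accepted by this cluster's API server version): download the manifest, strip the offending annotation line, and re-apply. The local file name recommended.yaml is just a working copy, not anything the project mandates.

```shell
# Download the Dashboard manifest to a local working copy
curl -fsSL -o recommended.yaml \
  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

# Delete the alpha seccomp annotation line that the API server rejects
# (-i.bak edits in place and keeps a backup; works with GNU and BSD sed)
sed -i.bak '/seccomp\.security\.alpha\.kubernetes\.io\/pod/d' recommended.yaml

# Re-apply the patched manifest
kubectl apply -f recommended.yaml
```

Alternatively, upgrading the cluster to a Kubernetes version that recognizes `runtime/default` for that annotation avoids patching the manifest.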
Step 2:
kubectl proxy --port=6001 & disown
The output is:
Starting to serve on 127.0.0.1:6001
Now when I access the site
http://localhost:6001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
it gives the following error:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "no endpoints available for service \"kubernetes-dashboard\"",
"reason": "ServiceUnavailable",
"code": 503
}
Also, listing the pods does not show the Kubernetes Dashboard:
kubectl get pod --namespace=kube-system
shows
NAME READY STATUS RESTARTS AGE
etcd-docker-for-desktop 1/1 Running 0 13d
kube-apiserver-docker-for-desktop 1/1 Running 0 13d
kube-controller-manager-docker-for-desktop 1/1 Running 0 13d
kube-scheduler-docker-for-desktop 1/1 Running 0 13d
kubectl get pod --namespace=kubernetes-dashboard
returns:
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-659f6797cf-8v45l 0/1 CrashLoopBackOff 15 1h
How can I fix this problem?
Update: The following link http://localhost:6001/api/v1/namespaces/kubernetes-dashboard/services gives the output below:
{
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/namespaces/kubernetes-dashboard/services",
"resourceVersion": "254593"
},
"items": [
{
"metadata": {
"name": "dashboard-metrics-scraper",
"namespace": "kubernetes-dashboard",
"selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/dashboard-metrics-scraper",
"uid": "932dc2d5-4675-11ea-952a-025000000001",
"resourceVersion": "202570",
"creationTimestamp": "2020-02-03T11:08:58Z",
"labels": {
"k8s-app": "dashboard-metrics-scraper"
},
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"dashboard-metrics-scraper\"},\"name\":\"dashboard-metrics-scraper\",\"namespace\":\"kubernetes-dashboard\"},\"spec\":{\"ports\":[{\"port\":8000,\"targetPort\":8000}],\"selector\":{\"k8s-app\":\"dashboard-metrics-scraper\"}}}\n"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 8000,
"targetPort": 8000
}
],
"selector": {
"k8s-app": "dashboard-metrics-scraper"
},
"clusterIP": "10.106.158.177",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
},
{
"metadata": {
"name": "kubernetes-dashboard",
"namespace": "kubernetes-dashboard",
"selfLink": "/api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard",
"uid": "931a96eb-4675-11ea-952a-025000000001",
"resourceVersion": "202558",
"creationTimestamp": "2020-02-03T11:08:58Z",
"labels": {
"k8s-app": "kubernetes-dashboard"
},
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"kubernetes-dashboard\"},\"name\":\"kubernetes-dashboard\",\"namespace\":\"kubernetes-dashboard\"},\"spec\":{\"ports\":[{\"port\":443,\"targetPort\":8443}],\"selector\":{\"k8s-app\":\"kubernetes-dashboard\"}}}\n"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 443,
"targetPort": 8443
}
],
"selector": {
"k8s-app": "kubernetes-dashboard"
},
"clusterIP": "10.108.57.147",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
}
]
}
A working dashboard installation should list the resources below in the Running state:
$ kubectl get all -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-76585494d8-c6n5x 1/1 Running 0 136m
pod/kubernetes-dashboard-5996555fd8-wmc44 1/1 Running 0 136m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.109.217.134 <none> 8000/TCP 136m
service/kubernetes-dashboard ClusterIP 10.108.201.245 <none> 443/TCP 136m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 136m
deployment.apps/kubernetes-dashboard 1/1 1 1 136m
NAME DESIRED CURRENT READY AGE
replicaset.apps/dashboard-metrics-scraper-76585494d8 1 1 1 136m
replicaset.apps/kubernetes-dashboard-5996555fd8 1 1 1 136m
Run the describe command on the failed pod and check the listed events to find the issue.
Example:
$ kubectl describe -n kubernetes-dashboard pod kubernetes-dashboard-5996555fd8-wmc44
Events: <none>
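When describe shows no events, the container log is usually the next place to look, since CrashLoopBackOff means the container itself keeps exiting. A sketch of the follow-up checks (the pod name below is the one from the question and will differ on another cluster):

```shell
# Show why the previous container instance exited
# (--previous reads the log of the last crashed restart)
kubectl logs -n kubernetes-dashboard kubernetes-dashboard-659f6797cf-8v45l --previous

# Confirm whether the Service currently has any ready endpoints backing it;
# an empty ENDPOINTS column explains the 503 "no endpoints available" error
kubectl get endpoints -n kubernetes-dashboard kubernetes-dashboard
```

Once the pod reaches Running and the endpoint list is non-empty, the kubectl proxy URL from Step 2 should serve the dashboard instead of the 503.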