After I deployed the web UI (Kubernetes Dashboard), I logged in to the dashboard, but nothing was shown there; instead, I got a list of errors in the notifications:
statefulsets.apps is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "statefulsets" in API group "apps" in the namespace "default"
replicationcontrollers is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "replicationcontrollers" in API group "" in the namespace "default"
replicasets.apps is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "replicasets" in API group "apps" in the namespace "default"
deployments.apps is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "deployments" in API group "apps" in the namespace "default"
jobs.batch is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "jobs" in API group "batch" in the namespace "default"
events is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "events" in API group "" in the namespace "default"
pods is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "pods" in API group "" in the namespace "default"
daemonsets.apps is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "daemonsets" in API group "apps" in the namespace "default"
cronjobs.batch is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "cronjobs" in API group "batch" in the namespace "default"
namespaces is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "namespaces" in API group "" at the cluster scope
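All of these errors say the same thing: the dashboard pod is running under the `default` ServiceAccount of the `kubernetes-dashboard` namespace, and that account has no RBAC permissions at all. A minimal sketch for diagnosing and fixing this, assuming read-only cluster-wide access is acceptable (the binding name `dashboard-view` is arbitrary, not from any guide):

```shell
# Confirm what the dashboard's ServiceAccount is allowed to do
kubectl auth can-i list pods --namespace default \
  --as system:serviceaccount:kubernetes-dashboard:default

# Grant the built-in read-only "view" ClusterRole to that ServiceAccount.
# "dashboard-view" is just an example name for the binding.
kubectl create clusterrolebinding dashboard-view \
  --clusterrole=view \
  --serviceaccount=kubernetes-dashboard:default
```

After the binding exists, `kubectl auth can-i` should answer `yes` and the dashboard notifications should stop.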
Here are all my pods:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-58497c65d5-828dm 1/1 Running 0 64m 10.244.192.193 master-node1 <none> <none>
kube-system calico-node-dblzp 1/1 Running 0 17m 157.245.57.140 cluster3-node1 <none> <none>
kube-system calico-node-dwdvh 1/1 Running 1 49m 157.245.57.139 cluster2-node2 <none> <none>
kube-system calico-node-gskr2 1/1 Running 0 17m 157.245.57.133 cluster1-node2 <none> <none>
kube-system calico-node-jm5rd 1/1 Running 0 17m 157.245.57.144 cluster4-node2 <none> <none>
kube-system calico-node-m8htd 1/1 Running 0 17m 157.245.57.141 cluster3-node2 <none> <none>
kube-system calico-node-n7d44 1/1 Running 0 64m 157.245.57.146 master-node1 <none> <none>
kube-system calico-node-wblpr 1/1 Running 0 17m 157.245.57.135 cluster2-node1 <none> <none>
kube-system calico-node-wbrzf 1/1 Running 1 29m 157.245.57.136 cluster1-node1 <none> <none>
kube-system calico-node-wqwkj 1/1 Running 0 17m 157.245.57.142 cluster4-node1 <none> <none>
kube-system coredns-78fcd69978-cnzxv 1/1 Running 0 64m 10.244.192.194 master-node1 <none> <none>
kube-system coredns-78fcd69978-f4ln8 1/1 Running 0 64m 10.244.192.195 master-node1 <none> <none>
kube-system etcd-master-node1 1/1 Running 1 64m 157.245.57.146 master-node1 <none> <none>
kube-system kube-apiserver-master-node1 1/1 Running 1 64m 157.245.57.146 master-node1 <none> <none>
kube-system kube-controller-manager-master-node1 1/1 Running 1 64m 157.245.57.146 master-node1 <none> <none>
kube-system kube-proxy-2b5bz 1/1 Running 0 17m 157.245.57.144 cluster4-node2 <none> <none>
kube-system kube-proxy-cslwc 1/1 Running 3 49m 157.245.57.139 cluster2-node2 <none> <none>
kube-system kube-proxy-hlvxc 1/1 Running 0 17m 157.245.57.140 cluster3-node1 <none> <none>
kube-system kube-proxy-kkdqn 1/1 Running 0 17m 157.245.57.142 cluster4-node1 <none> <none>
kube-system kube-proxy-sm7nq 1/1 Running 0 17m 157.245.57.133 cluster1-node2 <none> <none>
kube-system kube-proxy-wm42s 1/1 Running 0 64m 157.245.57.146 master-node1 <none> <none>
kube-system kube-proxy-wslxd 1/1 Running 0 17m 157.245.57.141 cluster3-node2 <none> <none>
kube-system kube-proxy-xnh24 1/1 Running 0 17m 157.245.57.135 cluster2-node1 <none> <none>
kube-system kube-proxy-zvsqf 1/1 Running 1 29m 157.245.57.136 cluster1-node1 <none> <none>
kube-system kube-scheduler-master-node1 1/1 Running 1 64m 157.245.57.146 master-node1 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-856586f554-c4thn 1/1 Running 0 14m 10.244.14.65 cluster2-node2 <none> <none>
kubernetes-dashboard kubernetes-dashboard-67484c44f6-hwvj5 1/1 Running 0 14m 10.244.213.65 cluster1-node1 <none> <none>
Here are all my nodes:
NAME STATUS ROLES AGE VERSION
cluster1-node1 Ready <none> 29m v1.22.1
cluster1-node2 Ready <none> 17m v1.22.1
cluster2-node1 Ready <none> 17m v1.22.1
cluster2-node2 Ready <none> 49m v1.22.1
cluster3-node1 Ready <none> 17m v1.22.1
cluster3-node2 Ready <none> 17m v1.22.1
cluster4-node1 Ready <none> 17m v1.22.1
cluster4-node2 Ready <none> 17m v1.22.1
master-node1 Ready control-plane,master 65m v1.22.1
I suspect there is a misconfiguration in the kubernetes-dashboard namespace, so it cannot access the cluster.
If you have applied the proper ClusterRoleBinding for your kubernetes-dashboard ServiceAccount and still get the forbidden messages, take a look at the token you are using to access the dashboard.
Run kubectl get serviceaccount kubernetes-dashboard -o yaml
and look for .secrets[].name — that is the secret containing the token you need to use to log in.
Then run kubectl get secret <the secret name> -o jsonpath='{.data.token}' | base64 -d
and copy the whole token. Note that you should not copy the trailing % character — it is your shell's marker for output without a final newline, not part of the token.
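The two steps above can be combined into a one-liner sketch, assuming Kubernetes v1.22 as in the question, where a token Secret is still auto-created for each ServiceAccount (on v1.24+ you would use `kubectl create token` instead):

```shell
# Look up the ServiceAccount's secret name, then decode its token
kubectl -n kubernetes-dashboard get secret \
  "$(kubectl -n kubernetes-dashboard get serviceaccount kubernetes-dashboard \
      -o jsonpath='{.secrets[0].name}')" \
  -o jsonpath='{.data.token}' | base64 -d
```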
I have recreated the situation according to the attached tutorial and it works for me. Make sure that you are logging in properly:
To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. Currently, Dashboard only supports logging in with a Bearer Token. To create a token for this demo, you can follow our guide on creating a sample user.
Warning: The sample user created in the tutorial will have administrative privileges and is for educational purposes only.
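For reference, the sample user from that tutorial boils down to two manifests like the following (the `admin-user` name matches the dashboard guide; apply them with `kubectl apply -f`):

```yaml
# ServiceAccount to log in with
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
# Bind it to the built-in cluster-admin role (demo purposes only)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```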
You can also create an admin role binding:
kubectl create clusterrolebinding serviceaccounts-cluster-admin \
--clusterrole=cluster-admin \
--group=system:serviceaccounts
However, you need to know that this is a potentially very dangerous solution, as it grants cluster-admin permissions to every service account in the cluster; anyone who can read a service account's secret effectively gets full control. You should use this method only for learning and demonstration purposes.
You can read more about this solution here and more about RBAC authorization.
See also this question.