I am trying to access the Kubernetes Dashboard using a kubeconfig file. When I select the config file on the authentication screen, it gives 'Not enough data to create auth info structure.' But the same config file works for the kubectl command.
Here is my config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://kubemaster:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
Any help to resolve this issue?
Thanks SR
If you want to see the dashboard in action before going through the major investment of setting up security, here is the way I got things going quickly. I did this with v2.0.0-rc7:
- The ClusterRoleRef that installs with this method needs to be replaced (a sketch of such a binding follows this list). You need to delete the existing one first with kubectl delete ..., then add the replacement.
- Enable the login "skip" option on the dashboard deployment; kubectl edit the deployment to get that set up.
- Now you can go to the web page and click "skip". Voila! All your keys are exposed with no password. Pray nobody gets hold of that link!
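For what it's worth, a replacement binding along those lines might look roughly like this. This is only a sketch, not the exact manifest the original link pointed to; it assumes the v2.0.0-rc7 recommended install, which names both the ServiceAccount and the namespace kubernetes-dashboard:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin    # full cluster access, hence the warnings above
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

Delete the binding the install created (kubectl delete clusterrolebinding kubernetes-dashboard) before applying this one.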
But wait, you say it's still too hard to get in? If you have a load balancer installed, here are two additional steps:
- kubectl -n kubernetes-dashboard edit service kubernetes-dashboard will allow you to change the service spec to type: LoadBalancer (a non-interactive way to do this is sketched after these steps).
- kubectl -n kubernetes-dashboard describe service kubernetes-dashboard will now show you the IP address that it has kindly put your insecure dashboard on.
Now you have an insecure port with no password to easily browse your crown jewels. Enjoy!
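If you would rather not open an editor, the same service change can be made with kubectl patch. This is just a convenience sketch of the step above, using the same namespace and service names:

kubectl -n kubernetes-dashboard patch service kubernetes-dashboard \
  -p '{"spec": {"type": "LoadBalancer"}}'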
After looking at this answer, How to sign in kubernetes dashboard?, and the source code, I figured out the kubeconfig authentication.
After the kubeadm install, get the default service account token on the master server and add it to the config file. Then use that config file to authenticate.
You can use this to add the token:
#!/bin/bash
TOKEN=$(kubectl -n kube-system describe secret default| awk '$1=="token:"{print $2}')
kubectl config set-credentials kubernetes-admin --token="${TOKEN}"
Your config file should then look like this (shown via kubectl config view, truncated to the first 50 columns and the last 10 lines):
kubectl config view | cut -c1-50 | tail -10
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.ey
Only the authentication options specified by the --authentication-mode flag are supported in the kubeconfig file.
You can authenticate with a token (any token in the kube-system namespace):
$ kubectl get secrets -n kube-system
$ kubectl get secret $SECRET_NAME -n=kube-system -o json | jq -r '.data["token"]' | base64 -d > user_token.txt
and authenticate with that token (see the user_token.txt file).
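If jq isn't available, the same token can usually be pulled out with kubectl's jsonpath output instead. A minimal sketch, assuming $SECRET_NAME is one of the secrets listed by the first command:

kubectl get secret "$SECRET_NAME" -n kube-system -o jsonpath='{.data.token}' | base64 -d > user_token.txt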
Two things are going on here.
The usual way to deploy the Dashboard application is to:
- kubectl apply the YAML file pulled from the configuration recommended at the GitHub project (for the dashboard): /src/deploy/recommended/kubernetes-dashboard.yaml at tag v1.10.1
- run kubectl proxy and access the dashboard through the locally mapped port 8001 (see the sketch just below).
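For reference, those two steps might look roughly like this; the raw URL is my reconstruction from the repository path and the v1.10.1 tag mentioned above, so double-check it against the project before relying on it:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy    # dashboard then reachable via localhost:8001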
However, this default configuration is generic and minimal. It just maps a role binding with minimal privileges. And, especially on DigitalOcean, the kubeconfig file provided when provisioning the cluster lacks the actual token, which is necessary to log into the dashboard.
Thus, to fix these shortcomings, we need to ensure there is an account which has a binding to the cluster-admin ClusterRole in the Namespace kube-system. The above-mentioned default setup only provides a binding to kubernetes-dashboard-minimal. We can fix that by explicitly deploying:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
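Saved to a file (the name admin-user.yaml here is just an example), this can be applied with:

kubectl apply -f admin-user.yaml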
And then we also need to get the token for this ServiceAccount:
- kubectl get serviceaccount -n kube-system will list all service accounts. Check that the one you want/created is present.
- kubectl get secrets -n kube-system should list a secret for this account.
- kubectl describe secret -n kube-system admin-user-token-XXXXXX gives you the information about the token.
The other answers to this question provide ample hints on how this access could be scripted in a convenient way (e.g. using awk, using grep, using kubectl get with -o=json and piping to jq, or using -o=jsonpath), as sketched below.
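As one possible variant of that scripting, here is a sketch that pulls the admin-user token in a single pipeline. It assumes the ServiceAccount has an auto-generated token secret listed under .secrets, which is the case on the Kubernetes versions this answer was written against:

SECRET_NAME=$(kubectl -n kube-system get serviceaccount admin-user -o jsonpath='{.secrets[0].name}')
kubectl -n kube-system get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 -d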
You can then either:
- paste the token directly into the dashboard's token login, or
- edit the kubeconfig file and paste in the token for the admin user provided there.

If you want to get past the dashboard's authentication prompt and then be able to do admin things on the dashboard, I recommend this: https://github.com/kubernetes/dashboard/wiki/Creating-sample-user.
1 - Assuming one has followed the directions to set up the dashboard here: https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html
2 - And your normal kubectl access works from the command line (e.g. kubectl get services).
3 - And you are able to log in manually to the Dashboard with the token (obtained with kubectl -n kube-system describe secret ...), by using copy/paste.
4 - But now you want to use the "Kubeconfig" (instead of "Token") option to login to the Dashboard, for simplicity.
Solution:
Here is what it should look like...
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://kubemaster:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: PUT_YOUR_TOKEN_HERE_THAT_YOU_USED_TO_MANUALLY_LOGIN