So I have a basic setup of the Kubernetes dashboard as per the official instructions. It works perfectly with a cluster-admin role service account token. But when I create another service account with its own ClusterRole and ClusterRoleBinding, I cannot log in to the dashboard: I get the "Authentication failed. Please try again." message.
Here are the steps I take.
1.
kubectl create serviceaccount dashboard-reader -n kube-system
2.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-reader
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
EOF
3.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-reader
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dashboard-reader
subjects:
- kind: ServiceAccount
  name: dashboard-reader
  namespace: kube-system
EOF
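As a quick sanity check (not part of the original steps), the binding can be verified from the CLI with `kubectl auth can-i` by impersonating the service account before attempting the dashboard login:

```shell
# Should print "yes": the dashboard-reader account was granted list on all resources.
kubectl auth can-i list pods \
  --as=system:serviceaccount:kube-system:dashboard-reader

# Should print "no": write verbs were not granted to this role.
kubectl auth can-i delete pods \
  --as=system:serviceaccount:kube-system:dashboard-reader
```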
Then I take the token from the dashboard-reader-xyz secret and enter it on the dashboard login page. What I'm trying to achieve is separate tokens with different permissions: for example, administrators can log in to the dashboard with one token and have full permissions, while developers log in with a different token and can only view resources.
The dashboard version is 1.10.1, and the Kubernetes version is 1.13.5.
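For reference, pulling that token out of the generated secret can be sketched like this (the secret's suffix is random, so it is looked up by prefix first):

```shell
# Find the generated token secret for the dashboard-reader service account.
SECRET_NAME=$(kubectl -n kube-system get secret \
  | awk '/dashboard-reader-token/ {print $1}')

# Secret data is base64-encoded; decode the token before pasting it
# into the dashboard login page.
kubectl -n kube-system get secret "$SECRET_NAME" \
  -o "jsonpath={.data.token}" | base64 --decode
```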
It's possible to create a service account in Kubernetes and restrict it to a specific namespace.
Follow these steps:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mynamespace-user
  namespace: mynamespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mynamespace-user-full-access
  namespace: mynamespace
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources:
  - jobs
  - cronjobs
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mynamespace-user-view
  namespace: mynamespace
subjects:
- kind: ServiceAccount
  name: mynamespace-user
  namespace: mynamespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: mynamespace-user-full-access
Replace mynamespace with the name of the namespace to which you want to restrict developers.
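Assuming the manifest above is saved to a file (the name mynamespace-user.yaml is just an example), applying it looks like:

```shell
# Create the namespace first if it does not exist yet.
kubectl create namespace mynamespace

# Apply the ServiceAccount, Role and RoleBinding defined above.
kubectl apply -f mynamespace-user.yaml
```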
Get the service account's token:
kubectl -n mynamespace describe secret $(kubectl -n mynamespace get secret | grep mynamespace-user | awk '{print $1}')
apiVersion: v1
kind: Config
preferences: {}
# Define the cluster
clusters:
- cluster:
    certificate-authority-data: PLACE CERTIFICATE HERE
    # You'll need the API endpoint of your Cluster here:
    server: https://YOUR_KUBERNETES_API_ENDPOINT
  name: my-cluster
# Define the user
users:
- name: mynamespace-user
  user:
    as-user-extra: {}
    client-key-data: PLACE CERTIFICATE HERE
    token: PLACE USER TOKEN HERE
# Define the context: linking a user to a cluster
contexts:
- context:
    cluster: my-cluster
    namespace: mynamespace
    user: mynamespace-user
  name: mynamespace
# Define current context
current-context: mynamespace
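Once the placeholders are filled in, the resulting file can be handed to a developer and used directly (kubeconfig-dev is an example filename):

```shell
# Uses the restricted service account; only resources in mynamespace are visible.
kubectl --kubeconfig=kubeconfig-dev get pods

# Requests outside the namespace should be rejected, since the Role
# binding only grants access within mynamespace.
kubectl --kubeconfig=kubeconfig-dev get pods -n kube-system
```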
Get the cluster's CA certificate for the kubeconfig:
kubectl -n mynamespace get secret $(kubectl -n mynamespace get secret | grep mynamespace-user | awk '{print $1}') -o "jsonpath={.data['ca\.crt']}"
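Note that the secret's `.data` fields are base64-encoded. The `certificate-authority-data` field in the kubeconfig expects base64, so the ca.crt value printed above can be pasted in as-is; the `token` field expects plain text, so decode it first. An illustrative decode with a made-up, truncated sample value:

```shell
# ENCODED_TOKEN is a fabricated sample, not a real token.
ENCODED_TOKEN="ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXQ="
echo "$ENCODED_TOKEN" | base64 --decode   # prints: eyJhbGciOiJSUzI1NiIsImt
```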
I have tried these steps in my environment and they work perfectly.
Hope this helps.