I have two kubernetes clusters that were set up by kops. They are both running v1.10.8. I have done my best to mirror the configuration between the two. They both have RBAC enabled. I have kubernetes-dashboard running on both. They both have a /srv/kubernetes/known_tokens.csv with an admin and a kube user:

$ sudo cat /srv/kubernetes/known_tokens.csv
ABCD,admin,admin,system:masters
DEFG,kube,kube
(... other users ...)
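(For context, this file feeds the API server's static token authenticator via its --token-auth-file flag. A quick way to see what a token authenticates as is to call the API directly; a minimal sketch, where <master> is a placeholder for your API server address:)

# Call the API server directly with the kube user's static token.
# A 401 means the token wasn't accepted at all; a 403 means it
# authenticated as "kube" but RBAC denied the request.
curl -sk -H "Authorization: Bearer DEFG" \
  https://<master>/api/v1/namespaces/default/pods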
My question is how these users get authorized with respect to RBAC. When authenticating to kubernetes-dashboard using tokens, the admin user's token works on both clusters and has full access. But the kube user's token only has access on one of the clusters. On the cluster where it fails, I get the following errors in the dashboard:
configmaps is forbidden: User "kube" cannot list configmaps in the namespace "default"
persistentvolumeclaims is forbidden: User "kube" cannot list persistentvolumeclaims in the namespace "default"
secrets is forbidden: User "kube" cannot list secrets in the namespace "default"
services is forbidden: User "kube" cannot list services in the namespace "default"
ingresses.extensions is forbidden: User "kube" cannot list ingresses.extensions in the namespace "default"
daemonsets.apps is forbidden: User "kube" cannot list daemonsets.apps in the namespace "default"
pods is forbidden: User "kube" cannot list pods in the namespace "default"
events is forbidden: User "kube" cannot list events in the namespace "default"
deployments.apps is forbidden: User "kube" cannot list deployments.apps in the namespace "default"
replicasets.apps is forbidden: User "kube" cannot list replicasets.apps in the namespace "default"
jobs.batch is forbidden: User "kube" cannot list jobs.batch in the namespace "default"
cronjobs.batch is forbidden: User "kube" cannot list cronjobs.batch in the namespace "default"
replicationcontrollers is forbidden: User "kube" cannot list replicationcontrollers in the namespace "default"
statefulsets.apps is forbidden: User "kube" cannot list statefulsets.apps in the namespace "default"
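(These denials can be reproduced outside the dashboard with impersonation. A minimal sketch, assuming your own kubeconfig credentials are cluster-admin, which includes impersonation rights:)

# Ask the API server what the "kube" user may do, without using its token.
kubectl auth can-i list pods --namespace default --as kube
kubectl auth can-i list secrets --namespace default --as kube
# Prints "yes" or "no" per check, matching the forbidden errors above.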
As per the official docs, "Kubernetes does not have objects which represent normal user accounts".
I can't find anything on the working cluster that would grant authorization to kube. Likewise, I can't find anything that would restrict kube on the other cluster. I've checked all ClusterRoleBinding resources, as well as the RoleBinding resources in the default and kube-system namespaces. None of these reference the kube user. So why the discrepancy in access to the dashboard, and how can I adjust it?
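(A search like the following can enumerate every binding that names the user; a rough sketch using jq, which is my addition and not part of the cluster setup:)

# Find every ClusterRoleBinding whose subjects include User "kube".
kubectl get clusterrolebindings -o json \
  | jq -r '.items[] | select(.subjects[]? | .kind == "User" and .name == "kube") | .metadata.name'

# Same for namespaced RoleBindings, across all namespaces.
kubectl get rolebindings --all-namespaces -o json \
  | jq -r '.items[] | select(.subjects[]? | .kind == "User" and .name == "kube")
           | "\(.metadata.namespace)/\(.metadata.name)"'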
Some other questions:

- How do I debug authorization issues such as this? The dashboard logs just say this user doesn't have access. Is there somewhere I can see which serviceAccount a particular request or token is mapped to?
- What are groups in k8s? The k8s docs mention groups a lot. Even the static token users can be assigned a group such as system:masters, which looks like a role/clusterrole, but there is no system:masters role in my cluster. What exactly are groups? As per "Create user group using RBAC API?", it appears groups are simply arbitrary labels that can be defined per user. What's the point of them? Can I map a group to an RBAC serviceAccount? (See the binding sketch just after this list.)
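(For concreteness: a group in RBAC is just a string attached to the user at authentication time — for the static token authenticator, the optional fourth column of known_tokens.csv — and bindings can target it with a Group subject. The only thing that appears to make system:masters special is the bootstrap ClusterRoleBinding named cluster-admin, which binds the cluster-admin ClusterRole to that group. A minimal sketch with a made-up group name:)

# Hypothetical binding: give everyone in group "dev-team" read-only access.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dev-team-view
subjects:
- kind: Group
  name: dev-team                      # matches the group column in known_tokens.csv
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                          # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io

(Groups can't be mapped to a serviceAccount as such; service accounts are their own subject kind, though every service account is automatically placed in groups like system:serviceaccounts.)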
Update

I restarted the working cluster and it no longer works. I get the same authorization errors as on the other cluster. Looks like it was some sort of cached access. Sorry for the bogus question. I'm still curious about my follow-up questions, but they can be made into separate questions.
Hard to tell without access to the cluster, but my guess is that you have a Role and a RoleBinding somewhere for the kube user on the cluster that works, not a ClusterRole with a ClusterRoleBinding.
Something like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: my-role
  namespace: default    # Role and RoleBinding are namespaced, unlike their Cluster* counterparts
rules:
- apiGroups: [""]       # "" means the core API group
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-role-binding
  namespace: default
subjects:
- kind: User
  name: "kube"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: my-role
  apiGroup: rbac.authorization.k8s.io
How do I debug authorization issues such as this? The dashboard logs just say this user doesn't have access. Is there somewhere I can see which serviceAccount a particular request or token is mapped to?
You can look at the kube-apiserver logs under /var/log/kube-apiserver.log on your leader master, or, if it's running in a container, with docker logs <container-id-of-kube-apiserver>.
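One caveat from memory: the RBAC authorizer only logs the details of denied requests at higher verbosity, so this only helps if the apiserver runs with something like --v=5. The exact log text ("RBAC DENY") is what I recall from clusters of this vintage and may differ by version:

# Assumes --v=5 (or higher) on the kube-apiserver; at default verbosity
# RBAC denials may not be logged at all.
grep "RBAC DENY" /var/log/kube-apiserver.log | grep '"kube"'
# Each matching line names the user and the groups the request carried,
# which is one way to see what a given token was mapped to.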