I am trying to understand how kubectl gets the permissions to run commands. I understand that all interactions with kubernetes clusters go through the kube-apiserver, so when we run a kubectl command, say kubectl get pods, from the master node, the request goes via the kube-apiserver.
The apiserver does the authentication and authorization and provides the results back. kubectl, like any other user or resource, should also be associated with a role and rolebinding to acquire the permissions for accessing the resources on the cluster. How can I check which role and rolebinding kubectl is associated with?
Apologies if this is a ridiculous question.
This answer is an extension to the other ones and helps you with scripts when you are using client certificates:
If you are using client certificates, your ~/.kube/config file contains client-certificate-data for the user of the current context. This data is a base64-encoded certificate which can be displayed in text form with openssl. The interesting information for your question is in the Subject section.
This script will print the Subject line of the client certificate:
$ kubectl config view --raw -o json \
| jq ".users[] | select(.name==\"$(kubectl config view --minify -o jsonpath='{.contexts[0].context.user}')\")" \
| jq -r '.user["client-certificate-data"]' \
| base64 -d | openssl x509 -noout -text | grep "Subject:"
Output on my Mac when running kubernetes via Docker for Mac:
Subject: O=system:masters, CN=docker-for-desktop
O is the organization and represents a group in kubernetes. CN is the common name and is interpreted as the user by kubernetes.
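If your kubectl and cluster are recent enough (roughly 1.27+, where the SelfSubjectReview API is available), you can skip the certificate parsing and simply ask the API server how it sees you; the exact table layout may differ between versions, but it reports a Username and Groups along these lines:
$ kubectl auth whoami
ATTRIBUTE   VALUE
Username    docker-for-desktop
Groups      [system:masters system:authenticated]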
Now you know which user and group you are using with kubectl at the moment. To find out which (cluster)rolebinding you are using, you have to look for the identified group/user:
$ group="system:masters"
$ kubectl get clusterrolebindings -o json \
| jq ".items[] | select(.subjects[]?.name==\"$group\")"
{
  "apiVersion": "rbac.authorization.k8s.io/v1",
  "kind": "ClusterRoleBinding",
  "metadata": {
    "annotations": {
      "rbac.authorization.kubernetes.io/autoupdate": "true"
    },
    "creationTimestamp": "2020-03-31T14:12:13Z",
    "labels": {
      "kubernetes.io/bootstrapping": "rbac-defaults"
    },
    "name": "cluster-admin",
    "resourceVersion": "95",
    "selfLink": "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin",
    "uid": "878fa48b-cf30-42e0-8e3c-0f27834dfeed"
  },
  "roleRef": {
    "apiGroup": "rbac.authorization.k8s.io",
    "kind": "ClusterRole",
    "name": "cluster-admin"
  },
  "subjects": [
    {
      "apiGroup": "rbac.authorization.k8s.io",
      "kind": "Group",
      "name": "system:masters"
    }
  ]
}
You can see in the output that this group is associated with the ClusterRole cluster-admin. You can take a closer look at this clusterrole to see the permissions in detail:
$ kubectl get clusterrole cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2020-03-31T14:12:12Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "42"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin
  uid: 9201f311-4d07-46c3-af36-2bca9ede098f
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
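If you only want a quick sanity check of what your current credentials can do, you can also ask the API server to list your effective permissions instead of tracing the bindings by hand; for a cluster-admin user the resulting table basically says every resource, every verb:
$ kubectl auth can-i --list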
kubectl is not associated with a role or role binding. kubectl uses a file called kubeconfig. That file has either a client certificate or a JWT bearer token.
If a client certificate is presented and verified by the API server, the common name of the subject is used as the user name for the request.
If a JWT bearer token is used then all the data needed to identify the user is in the token itself.
That's how authentication of a user happens in kubernetes.
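If your kubeconfig user carries a bearer token (a JWT) instead of a certificate, you can peek into it with standard tools. A rough sketch, where my-user is a placeholder for your user entry, and the tr/padding dance is needed because JWTs are base64url-encoded without padding:
$ token=$(kubectl config view --raw -o json \
    | jq -r '.users[] | select(.name=="my-user") | .user.token')
$ payload=$(echo "$token" | cut -d. -f2 | tr '_-' '/+')
$ while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
$ echo "$payload" | base64 -d | jq .
The decoded payload contains the claims (sub and, for OIDC setups, usually group claims) that the API server uses to identify you.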
The user's authorization is defined by Role-Based Access Control (RBAC) rules in the form of Roles and RoleBindings.
You can use kubectl config view to see which context is active now. You will see something like this:
contexts:
- context:
    cluster: my-cluster
    namespace: stage
    user: alice
  name: stage-ctx
current-context: stage-ctx
That means that each command goes to the stage namespace of my-cluster (if no namespace is specified in the command) and is authenticated as user alice there.
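Two small helpers around that, in case you only want the active context name or want to switch to the one from the snippet above (stage-ctx is just the example name):
$ kubectl config current-context
stage-ctx
$ kubectl config use-context stage-ctx
Switched to context "stage-ctx".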
The next thing happens on the server side. There is probably a Role that allows somebody to get and list pods. Let's call it edit-stage-role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: stage
  name: edit-stage-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["pods"]
  verbs: ["get", "list"]
And there is also a binding that basically assigns this role to particular subjects, like groups or users:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: edit-stage-rb
  namespace: stage
roleRef:
  kind: Role
  name: edit-stage-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: bob
  apiGroup: rbac.authorization.k8s.io
In this example it binds the role to bob and alice. After the request is authenticated and k8s knows that you are user alice, it tries to authorize the request by evaluating your permissions, which are stored in one of the roles bound to alice.
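The same binding could also be created imperatively; a sketch, assuming the Role above already exists in the stage namespace:
$ kubectl create rolebinding edit-stage-rb --role=edit-stage-role \
    --user=alice --user=bob --namespace=stage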
From a high level it looks like this. There can be multiple Roles, as well as ClusterRoles, ClusterRoleBindings and other options, but the overall concept is the same.
And you, as admin, can impersonate a particular user in a particular namespace to see if the set of roles and bindings works as expected. Use this command:
$ kubectl auth can-i get pods --namespace=stage --as alice
yes
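And if your account is allowed to impersonate (the impersonate verb in RBAC), you can go one step further and run the actual command as that user to see exactly what they would see:
$ kubectl get pods --namespace=stage --as=alice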
But from the end user's point of view, there is just a user that represents you in the cluster. All the other machinery is only visible to the admin (or to you, if you have permissions for that).
That's just a brief explanation. You can read more at https://kubernetes.io/docs/reference/access-authn-authz/rbac/