I am trying to understand where the username field in this ConfigMap is mapped to in the Kubernetes cluster.
This is a sample ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eksctl-my-cluster-nodegroup
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::444455556666:user/ops-user
      username: ops-user
      groups:
        - eks-console-dashboard-full-access-group
If I change the username from system:node:{{EC2PrivateDNSName}} to something like mynode:{{EC2PrivateDNSName}}, does it really make any difference? Does the system: prefix mean anything special to the Kubernetes cluster?
And where can I see these users in Kubernetes? Can I query them with kubectl, just like kubectl get pods, e.g. kubectl get usernames? Is it a dummy user name we are providing for the mapping, or does it hold any special privileges?
Where does the name {{EC2PrivateDNSName}} come from? Are there other variables available? I can't find any information about this in the documentation.
Thanks in advance!
Posting the answer as a community wiki, feel free to edit and expand.
As you can read in the documentation, system:node requires the system: prefix. If you remove system:, it won't work correctly:
system:node
Allows access to resources required by the kubelet, including read access to all secrets, and write access to all pod status objects. You should use the Node authorizer and NodeRestriction admission plugin instead of the system:node role, and allow granting API access to kubelets based on the Pods scheduled to run on them. The system:node role only exists for compatibility with Kubernetes clusters upgraded from versions prior to v1.8.
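To see what these built-in system: names grant, you can inspect the default RBAC objects on the cluster directly; a minimal sketch, assuming you have kubectl access (exact output varies by cluster version):

# permissions granted by the legacy system:node cluster role
kubectl get clusterrole system:node -o yaml
# list bindings together with the users/groups they apply to
kubectl get clusterrolebindings -o wide | grep system:node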
You can view RBAC users using an external plugin, for example RBAC Lookup, with the rbac-lookup command:
RBAC Lookup is a CLI that allows you to easily find Kubernetes roles and cluster roles bound to any user, service account, or group name. Binaries are generated with goreleaser for each release for simple installation.
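For example, assuming the plugin is installed via Krew, you could look up the users mapped in the question's ConfigMap (the user names here come from that sample; column layout may differ between versions):

kubectl krew install rbac-lookup
# roles and cluster roles bound to the mapped users
kubectl rbac-lookup ops-user
kubectl rbac-lookup admin --output wide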
The names come from your AWS IAM. You can read more about it here:
Access to your cluster using AWS IAM entities is enabled by the AWS IAM Authenticator for Kubernetes, which runs on the Amazon EKS control plane. The authenticator gets its configuration information from the aws-auth ConfigMap. For all aws-auth ConfigMap settings, see the full configuration format in the aws-iam-authenticator documentation on GitHub.