Can only the creator user manage an AWS Kubernetes cluster (EKS) from kubectl?

3/22/2019

We have two clusters, named:

  1. MyCluster (created by me)
  2. OtherCluster (not created by me)

Where "me" is my own AWS IAM user.

I am able to manage the cluster I created, using kubectl:

>>> aws eks update-kubeconfig --name MyCluster --profile MyUser
>>> kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   59d

But I cannot manage the "OtherCluster" cluster (which was not created by me):

>>> aws eks update-kubeconfig --name OtherCluster --profile MyUser
>>> kubectl get svc
error: the server doesn't have a resource type "svc"
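
Side note: this error appears to be misleading. Raising kubectl's log verbosity suggests the API discovery calls are being rejected (typically with an HTTP 401 Unauthorized), rather than the "svc" resource type actually being absent:

# -v=6 logs each HTTP request kubectl makes and the response status
>>> kubectl get svc -v=6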

After reading feedback from people experiencing the same issue in this GitHub issue, I tried doing this under the context of the user who originally created "OtherCluster".

I accomplished this by editing "~/.kube/config" and adding an "AWS_PROFILE" variable under "users.user.exec.env". The profile represents the user who created the cluster.

~/.kube/config:

users:
- name: OtherCluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - OtherCluster
      command: aws-iam-authenticator
      env:
      - name: AWS_PROFILE
        value: OTHER_USER_PROFILE

This worked:

# ~/.kube/config is currently pointing to OtherCluster

>>> kubectl get svc 
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   1d

It is obviously not ideal for me to impersonate another person when I am managing the cluster. I would prefer to grant my own user access to manage the cluster via kubectl. Is there any way to grant permission to manage the cluster to a user other than the original creator? This seems overly restrictive.

-- James Wierzba
amazon-web-services
eks
kubernetes

2 Answers

3/23/2019

When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator. Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.
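
To confirm which IAM entity kubectl will authenticate as, it can help to inspect the caller identity of the AWS profile in use (MyUser here is the profile from the question):

# prints the account, user ID, and ARN behind the profile
aws sts get-caller-identity --profile MyUser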

To grant additional AWS users the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes, adding a new mapUsers entry to the ConfigMap. This EKS doc covers the whole process.
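
The ConfigMap lives in the kube-system namespace and can be opened for editing directly:

# opens the aws-auth ConfigMap in your default editor
kubectl edit -n kube-system configmap/aws-auth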

To add an IAM user: add the user details to the mapUsers section of the ConfigMap, under data. Add this section if it does not already exist in the file. Each entry supports the following parameters:

  • userarn: The ARN of the IAM user to add.
  • username: The user name within Kubernetes to map to the IAM user. By default, the user name is the ARN of the IAM user.
  • groups: A list of groups within Kubernetes to which the user is mapped. For more information, see Default Roles and Role Bindings in the Kubernetes documentation.

Example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/my-new-admin-user
      username: my-new-admin-user
      groups:
        - system:masters
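
Once the ConfigMap is saved, the newly mapped user should be able to reach the API server with their own profile (assuming here that MyUser is the profile of the IAM user added under mapUsers):

# re-point kubectl at the cluster using the newly mapped user's profile
aws eks update-kubeconfig --name OtherCluster --profile MyUser
kubectl get svc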
-- Eduardo Baitello
Source: StackOverflow

4/2/2019

Re-configuring kubectl for EKS, using the AWS auth profile for the new user, seemed to do the trick.

aws eks update-kubeconfig --name ${CLUSTER_NAME} --profile ${OTHER_USER}

Where ${OTHER_USER} is the new user I am trying to grant access to the EKS cluster, who is not the user that originally created the cluster.

I can't explain why this step worked for me now but did not work earlier when I posted this question. Hopefully this helps someone else.
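
For what it's worth, passing --profile appears to make update-kubeconfig embed that profile into the generated kubeconfig entry, which is roughly the same change I made by hand in the question. The generated user entry looks something like this (exact output may vary by CLI version; ${OTHER_USER} is a placeholder):

users:
- name: OtherCluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - token
      - -i
      - OtherCluster
      env:
      - name: AWS_PROFILE
        value: ${OTHER_USER}  # placeholder: the profile passed via --profile

Presumably this only grants access once the user has also been mapped in the aws-auth ConfigMap, as described in the other answer.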

-- James Wierzba
Source: StackOverflow