What policy should be assigned to an IAM user if I want to make it work with AWS EKS?

10/17/2019

Following the official document https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html, I have added the aws-auth mapping via its ConfigMap:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters
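
For reference, I edited this mapping with the command from the guide:

$ kubectl edit -n kube-system configmap/aws-auth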

But I currently haven't assigned any IAM policies to the user ops-user. Its AWS profile looks like this:

[ops-user]
region                = ap-southeast-2
aws_secret_access_key = xxxx
aws_access_key_id     = xxxx

After switching to that AWS profile, I can see the user details:

$ export AWS_PROFILE=ops-user

$ aws sts get-caller-identity
{
    "UserId": "AIDAJLD7JDWRXORLFXWYO",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/ops-user"
}

But when I try to manage the EKS cluster's pods, services, etc., I get the error below:

$ kubectl get pods
error: You must be logged in to the server (Unauthorized)
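
For completeness, kubectl was pointed at the cluster with a kubeconfig generated for this profile (the cluster name devel here is just a placeholder for mine):

$ aws eks update-kubeconfig --name devel --region ap-southeast-2 --profile ops-user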

So what policy should be assigned to this IAM user if I want it to work only within the EKS Kubernetes cluster?

I don't want the user to be able to manage any other AWS resources.

By the way, I can do all of this management with an IAM user that has the AdministratorAccess policy.

-- Bill
amazon-eks
amazon-iam
kubernetes

1 Answer

11/5/2019

I faced the same issue recently, and it took me a lot of time to get past it.

Just creating the user doesn’t give that user access to any resources in the cluster. In order to achieve that, we’ll need to define a role, and then bind the user to that role.

Create a file access.yaml. (We're going to create the user (a service account), a role, and a binding that attaches the role to that user.)

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-project-admin-user
  namespace: project-ns

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: eks-project-admin-user-full-access
  namespace: project-ns
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources:
  - jobs
  - cronjobs
  verbs: ["*"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: eks-project-admin-user-view
  namespace: project-ns
subjects:
- kind: ServiceAccount
  name: eks-project-admin-user
  namespace: project-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: eks-project-admin-user-full-access

Note: Make sure you change the service account name from "eks-project-admin-user" to your own.

You can use this site for a similar role binding for your user (based on your use case).
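
If you'd rather bind the IAM-mapped user directly (no service account), the subject becomes kind: User, and its name must match the username field of the aws-auth mapUsers entry; a minimal sketch based on the ops-user mapping from the question:

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ops-user-project-admin
  namespace: project-ns
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ops-user   # must match "username" in the aws-auth mapUsers entry
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: eks-project-admin-user-full-access

Note that this only restricts anything if ops-user is also removed from the system:masters group in aws-auth, since that group grants cluster-admin regardless of any namespaced Role.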

Now, let’s create all of this:

kubectl create -f access.yaml
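
You can sanity-check the binding with impersonation before handing out credentials (use --as=ops-user instead for the user binding above):

kubectl auth can-i get pods -n project-ns --as=system:serviceaccount:project-ns:eks-project-admin-user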

As for the question "what policy should be assigned to this IAM user if I want to make it work only in the EKS Kubernetes cluster": try the policy JSON below (as per the AWS documentation).

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "eks:*"
            ],
            "Resource": "*"
        }
    ]
}
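
To put this into effect, create the policy and attach it to the user (the policy name EksAllAccess and the account ID are placeholders):

aws iam create-policy --policy-name EksAllAccess --policy-document file://eks-policy.json
aws iam attach-user-policy --user-name ops-user --policy-arn arn:aws:iam::111122223333:policy/EksAllAccess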

Note: In the Resource field, you can also restrict access to a specific cluster.
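
For example, a statement scoped to a single cluster might look like this (region, account ID, and cluster name are placeholders):

{
    "Effect": "Allow",
    "Action": "eks:DescribeCluster",
    "Resource": "arn:aws:eks:ap-southeast-2:111122223333:cluster/devel"
}

eks:DescribeCluster is the call that aws eks update-kubeconfig makes; authorization inside the cluster is handled by aws-auth and RBAC, not by IAM policies.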

Now verify the changes using the command:

kubectl get pods -n project-ns

Hopefully you will no longer get the Unauthorized error.

-- Arutsudar Arut
Source: StackOverflow