Can kubectl work from an assumed role from AWS

1/18/2019

I'm using Amazon EKS for Kubernetes deployments (the cluster was initially created by an AWS admin user), and I'm currently having difficulty using AWS credentials from AWS STS assume-role to execute kubectl commands against the stack.

I have 2 EKS stacks in 2 different AWS accounts (PROD & NONPROD), and I'm trying to get the CI/CD tool to deploy to both Kubernetes stacks with the credentials provided by AWS STS assume-role, but I constantly get errors such as: error: You must be logged in to the server (the server has asked for the client to provide credentials).

I have followed the following link to add an additional AWS IAM role to the config:

But I'm not sure what I'm doing wrong.

I ran "aws eks update-kubeconfig" to update the local .kube/config file, contents populated as below:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [hidden]
    server: https://[hidden].eu-west-1.eks.amazonaws.com
  name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
contexts:
- context:
    cluster: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
    user: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
  name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
current-context: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - triage-eks
      command: aws-iam-authenticator
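
For reference, the update-kubeconfig command was along these lines (the cluster name is a placeholder here; the region matches the server URL above):

    aws eks update-kubeconfig --name <cluster-name> --region eu-west-1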

I had also previously updated the Kubernetes aws-auth ConfigMap with an additional role, as below:

data:
  mapRoles: |
    - rolearn: arn:aws:iam::[hidden]:role/ci_deployer
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:masters

My CI/CD EC2 instance can assume the ci_deployer role in either AWS account.
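
For context, the CI job assumes the role and exports the temporary credentials roughly like this (a simplified sketch; the role ARN is redacted and jq is assumed to be available):

    # Assume the deployer role and capture the temporary credentials
    CREDS=$(aws sts assume-role \
      --role-arn arn:aws:iam::[hidden]:role/ci_deployer \
      --role-session-name ci-deploy)
    # Export them so kubectl/aws-iam-authenticator picks them up
    export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .Credentials.AccessKeyId)
    export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .Credentials.SecretAccessKey)
    export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .Credentials.SessionToken)
    kubectl version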

Expected: I can call "kubectl version" to see both Client and Server versions

Actual: I get "the server has asked for the client to provide credentials"

What is still missing?

After further testing, I can confirm kubectl will only work from an environment (e.g. my CI EC2 instance with an AWS instance role) in the same AWS account where the EKS stack was created. This means that my CI instance in account A cannot communicate with the EKS cluster in account B, even though the CI instance can assume a role in account B and that role is included in the aws-auth ConfigMap of the account B EKS cluster. I hope it's due to missing configuration, as I find it rather undesirable if a CI tool can't deploy to multiple EKS clusters across multiple AWS accounts using role assumption.

I look forward to further @Kubernetes support on this.

-- Raymond Ng
amazon-eks
amazon-iam
kubectl
kubernetes

2 Answers

1/22/2019

From Step 1: Create Your Amazon EKS Cluster:

When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.

As you have discovered, you can only access the cluster with the same user/role that created the EKS cluster in the first place.

There is a way to add additional roles to the cluster after creation by editing the aws-auth ConfigMap.

Add User Role

By editing the aws-auth ConfigMap you can add different levels of access based on the role of the user.

First, you MUST have the "system:node:{{EC2PrivateDNSName}}" user mapping:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

This is required for Kubernetes to work at all, giving the nodes the ability to join the cluster. The "ARN of instance role" is the role that includes the required policies AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly, etc.
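
If any of those policies are missing, they can be attached to the node role, for example (the role name here is a placeholder):

    aws iam attach-role-policy --role-name <node-instance-role> \
      --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
    aws iam attach-role-policy --role-name <node-instance-role> \
      --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    aws iam attach-role-policy --role-name <node-instance-role> \
      --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly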

Below that, add your role:

    - rolearn: arn:aws:iam::[hidden]:role/ci_deployer
      username: ci-deployer
      groups:
        - system:masters

The 'username' can actually be set to almost anything. It appears to matter only if custom roles and bindings have been added to your EKS cluster.
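
For example, if you ever move the CI role out of system:masters, a binding could grant it narrower access by referencing that username (a hypothetical sketch, not something your current setup requires):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: ci-deployer-edit
      namespace: default
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: edit  # built-in ClusterRole granting read/write in a namespace
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: ci-deployer  # must match the 'username' in aws-auth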

Also, use the command 'aws sts get-caller-identity' to validate that the environment/shell and the AWS credentials are properly configured. When correctly configured, 'get-caller-identity' should return an ARN corresponding to the role specified in aws-auth.
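
For an assumed role, the output looks roughly like this (the account ID and session name below are placeholders); note the arn:aws:sts::...:assumed-role/... form, which corresponds to the arn:aws:iam::...:role/ci_deployer entry in aws-auth:

    {
        "UserId": "AROAXXXXXXXXXXXXXXXXX:ci-deploy",
        "Account": "111122223333",
        "Arn": "arn:aws:sts::111122223333:assumed-role/ci_deployer/ci-deploy"
    }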

-- grbonk
Source: StackOverflow

1/18/2019

Can kubectl work from an assumed role from AWS

Yes, it can work. A good way to troubleshoot is to run the following from the same command line where you are running kubectl:

$ aws sts get-caller-identity

You can see the ARN of the role (or user), and then make sure there's a trust relationship in IAM between it and the role that you specify here in your kubeconfig:

command: aws-iam-authenticator
args:
   - "token"
   - "-i"
   - "<cluster-name>"
   - "-r"
   - "<role-you-want-to-assume-arn>"

or with the newer option:

command: aws
args:
- eks
- get-token
- --cluster-name
- <cluster-name>
- --role
- <role-you-want-to-assume-arn>
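
The trust relationship mentioned above lives in the trust policy of the role you want to assume; a minimal sketch (the principal, e.g. your CI instance role, and the account ID are placeholders):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::111122223333:role/<ci-instance-role>"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }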

Note that if you are using aws eks update-kubeconfig, you can pass in the --role-arn flag to generate the above in your kubeconfig.
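
For example (same placeholders as above):

    aws eks update-kubeconfig --name <cluster-name> --role-arn <role-you-want-to-assume-arn>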

In your case, some things that you can look at (a quick way to check several of them is sketched after this list):

  • The credential environment variables may not be set in your CI:

    AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY
    
  • Your ~/.aws/credentials file may not be populated correctly in your CI. It should look something like this:

    [default]
    aws_access_key_id = xxxx
    aws_secret_access_key = xxxx
  • Generally, the environment variables take precedence, so you could have different credentials altogether in those environment variables.

  • It could also be the AWS_PROFILE env variable or the AWS_PROFILE config in ~/.kube/config:

    users:
    - name: aws
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1alpha1
          command: aws-iam-authenticator
          args:
            - "token"
            - "-i"
            - "<cluster-name>"
            - "-r"
            - "<role-arn>"
          env:
            - name: AWS_PROFILE  # <== is this value set?
              value: "<aws-profile>"
  • Is the profile set correctly under ~/.aws/config?
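
A quick way to check several of these at once from the CI shell (a minimal sketch, assuming a POSIX shell):

    # Which AWS-related environment variables are set?
    env | grep '^AWS_'
    # Which identity do the effective credentials resolve to?
    aws sts get-caller-identity
    # What do the shared credentials/config files contain?
    cat ~/.aws/credentials ~/.aws/config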

-- Rico
Source: StackOverflow