I'm trying to set up AWS IAM Authenticator for my k8s cluster. I have two AWS accounts, A and B.
The k8s cluster runs in account B.
In account A I have created the following resources:
Policy
Description: Grants permissions to assume the kubernetes-admin role
Policy:
  Statement:
  - Action: sts:*
    Effect: Allow
    Resource: arn:aws:iam::<AccountID-B>:role/kubernetes-admin
    Sid: KubernetesAdmin
  Version: 2012-10-17
The policy is attached to a group, and I added my IAM user to that group.
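To sanity-check the cross-account permissions outside of Kubernetes, the role assumption can be tested directly with the AWS CLI (myaccount is the profile used in the kubeconfig further down, and the session name admin-test is arbitrary; if this call already returns AccessDenied, the problem is on the IAM side rather than in the authenticator):

aws sts assume-role \
  --profile myaccount \
  --role-arn arn:aws:iam::<AccountID-B>:role/kubernetes-admin \
  --role-session-name admin-test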
In account B I have created the following role:
AssumeRolePolicyDocument:
  Statement:
  - Action: sts:AssumeRole
    Effect: Allow
    Principal:
      AWS: arn:aws:iam::<AccountID-A>:root
  Version: 2012-10-17
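To confirm which trust policy is actually attached, the role can be read back from account B (assuming it is literally named kubernetes-admin, as in the ARN above):

aws iam get-role --role-name kubernetes-admin --query 'Role.AssumeRolePolicyDocument'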
This is the ConfigMap used to configure aws-iam-authenticator:
apiVersion: v1
data:
  config.yaml: |
    # a unique-per-cluster identifier to prevent replay attacks
    # (good choices are a random token or a domain name that will be unique to your cluster)
    clusterID: k8s.mycluster.net
    server:
      # each mapRoles entry maps an IAM role to a username and set of groups
      # Each username and group can optionally contain template parameters:
      # "{{AccountID}}" is the 12 digit AWS ID.
      # "{{SessionName}}" is the role session name.
      mapRoles:
      - roleARN: arn:aws:iam::<AccountID-B>:role/kubernetes-admin
        username: kubernetes-admin:{{AccountID}}:{{SessionName}}
        groups:
        - system:masters
kind: ConfigMap
metadata:
  creationTimestamp: 2018-12-13T19:41:39Z
  labels:
    k8s-app: aws-iam-authenticator
  name: aws-iam-authenticator
  namespace: kube-system
  resourceVersion: "87401"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-iam-authenticator
  uid: 1bc39653-ff0f-11e8-a580-02b4590539ba
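If tokens are generated but then rejected, the authenticator's server-side logs normally show the reason. Assuming the authenticator DaemonSet carries the same k8s-app label as this ConfigMap, they can be read with:

kubectl -n kube-system logs -l k8s-app=aws-iam-authenticator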
The kubeconfig is:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <certificate>
    server: https://api.k8s.mycluster.net
  name: k8s.mycluster.net
contexts:
- context:
    cluster: k8s.mycluster.net
    namespace: kube-system
    user: k8s.mycluster.net
  name: k8s.mycluster.net
current-context: k8s.mycluster.net
kind: Config
preferences: {}
users:
- name: k8s.mycluster.net
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      env:
      - name: "AWS_PROFILE"
        value: "myaccount"
      args:
      - "token"
      - "-i"
      - "k8s.mycluster.net"
      - "-r"
      - "arn:aws:iam::<AccountID-B>:role/kubernetes-admin"
The result is:
could not get token: AccessDenied: Access denied
status code: 403, request id: 6ceac161-ff2f-11e8-b263-2b0e32831969
Unable to connect to the server: getting token: exec: exit status 1
Any idea? I don't get what I'm missing.
To add to this, my solution was to do the following:
In the ~/.kube directory, run:
aws eks update-kubeconfig --name eks-dev-cluster --role-arn=XXXXXXXXXXXX
This creates a file called config-my-eks-cluster. Edit it with:
vi config-my-eks-cluster
and comment out the two role lines (the -r flag and the role ARN):
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- token
- -i
- eks-dev-cluster
# - -r
# - arn:aws:iam::XXXXXXXXX:role/eks-dev-role   (the role you made for EKS)
command: aws-iam-authenticator
Then make sure you export your user profile with:
export AWS_PROFILE=XXXXXXXXX   (the profile of the user you used to create the cluster, in the console or through the CLI)
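To double-check that the exported profile resolves to that user, a quick sanity check is:

aws sts get-caller-identity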
Then run:
kubectl get svc --v=10
This puts the output into verbose mode and gives you details on any errors that come up.
The way to make it work properly is to remove

- "-r"
- "arn:aws:iam::<AccountID-B>:role/kubernetes-admin"

from the exec args and let the profile referenced by the AWS_PROFILE env var handle assuming the role.
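In practice that means defining a profile in ~/.aws/config that performs the role assumption and pointing AWS_PROFILE at it. A minimal sketch, assuming the base credentials live in a profile called myaccount and the assuming profile is called k8s-admin (both names are placeholders):

# ~/.aws/config
[profile k8s-admin]
role_arn = arn:aws:iam::<AccountID-B>:role/kubernetes-admin
source_profile = myaccount

Then point the kubeconfig's env entry (or your shell) at that profile:

export AWS_PROFILE=k8s-admin
kubectl get svc

Depending on the aws-iam-authenticator version, you may also need to export AWS_SDK_LOAD_CONFIG=1 so the Go SDK picks up role_arn from ~/.aws/config.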