I created an EC2 instance and an EKS cluster in the same AWS account. In order to use the EKS cluster from the EC2 instance, I have to grant it the necessary permissions.
I attached an instance profile role with some EKS operation permissions. On the IAM dashboard its role ARN is arn:aws:iam::11111111:role/ec2-instance-profile-role (A), but from inside the EC2 instance it shows up as the assumed-role ARN arn:aws:sts::11111111:assumed-role/ec2-instance-profile-role/i-00000000 (B):
$ aws sts get-caller-identity
{
"Account": "11111111",
"UserId": "AAAAAAAAAAAAAAA:i-000000000000",
"Arn": "arn:aws:sts::11111111:assumed-role/ec2-instance-profile-role/i-00000000"
}
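The permissions policy attached to the role looks roughly like this (a simplified sketch with eks:DescribeCluster and similar read actions, not the literal policy):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}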
I also created an aws-auth ConfigMap and applied it to the cluster's kube-system namespace so that the EC2 instance profile role is registered and allowed to access the cluster. I tried putting both A and B into mapRoles, and both hit the same issue.
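The mapping I put into aws-auth looks roughly like this (a sketch; the username and groups values are placeholders, and rolearn is where I tried both A and B):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # rolearn: I tried both the IAM role ARN (A) and the STS assumed-role ARN (B)
    - rolearn: arn:aws:iam::11111111:role/ec2-instance-profile-role
      username: ec2-instance
      groups:
        - system:masters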
When I run kubectl commands on the EC2 instance:
$ aws eks --region aws-region update-kubeconfig --name eks-cluster-name
$ kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://xxxxxxxxxxxxxxxxxxxxxxxxxxxx.aw1.aws-region.eks.amazonaws.com
  name: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name
contexts:
- context:
    cluster: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name
    user: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name
  name: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name
current-context: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name
kind: Config
preferences: {}
users:
- name: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - aws-region
      - eks
      - get-token
      - --cluster-name
      - eks-cluster-name
      - --role
      - arn:aws:sts::11111111:assumed-role/ec2-instance-profile-role/i-00000000
      command: aws
      env: null
      provideClusterInfo: false
$ kubectl get svc
error: You must be logged in to the server (Unauthorized)
I also checked the principal type in the role's trust policy. It is Service, not AWS, and it seems an AWS-type principal is what's needed, as in this example:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::333333333333:root" },
    "Action": "sts:AssumeRole"
  }
}
https://stackoverflow.com/questions/59704676/terraform-aws-assume-role/59705497#59705497
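For comparison, the trust policy on the instance profile role uses a Service principal, roughly the standard EC2 trust relationship (a sketch, not my exact policy):
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}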
But I tried creating a new role with an AWS-type trust policy and setting it in the aws-auth ConfigMap, and I still get the same issue.
How can I make this work? Do I need to create a new IAM user instead?
- name: external-staging
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - exec
      - test-dev
      - --
      - aws
      - eks
      - get-token
      - --cluster-name
      - eksCluster-1234
      - --role-arn
      - arn:aws:iam::3456789002:role/eks-cluster-admin-role-e65f32f
      command: aws-vault
      env: null
This config file is working for me. The key points are --role-arn (rather than --role) and, in my setup, command: aws-vault.