Currently, there are two EKS clusters, a prod and a dev one. I am trying to access the dev cluster, which exists in a different AWS account, and it gives me the error "You must be logged in to the server".
When I try to get the kubectl version I get the same error. This happens only with the dev cluster. Please point out my mistake, and let me know the steps to correct it if I am wrong anywhere.
AWS_PROFILE=eks_admin_dev kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-07-26T20:40:11Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
AWS_PROFILE=eks_admin_dev kubectl get pods
error: You must be logged in to the server (Unauthorized)
I have created an access key and secret access key for my dev user (which are admin credentials). I created two profiles, dev and eks_admin_dev. I understand that the source_profile part tells the CLI to use the dev profile to do an sts:AssumeRole for the eks-admin role.
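As a sanity check (a sketch; it assumes the eks-admin role's trust policy allows the dev user to assume it), comparing the caller identity for the two profiles should show the dev user for one and the assumed eks-admin role for the other:
$ AWS_PROFILE=dev aws sts get-caller-identity
$ AWS_PROFILE=eks_admin_dev aws sts get-caller-identity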
$ aws --version
aws-cli/1.16.45 Python/2.7.12 Linux/4.4.0-1066-aws botocore/1.12.35
$ kubectl config current-context
dev
$ cat ~/.aws/config
[default] ---> prod account
region = us-east-1
[profile eks_admin_dev] ---> dev account
role_arn = arn:aws:iam::xxxxxxxx:role/eks-admin
source_profile = dev
region = us-east
[profile dev] ---> dev account
region = us-east-1
My credentials:
$ cat ~/.aws/credentials
[old]
aws_secret_access_key = xxxxxxxxxxxxxx
aws_access_key_id = xxxxxxxxx
[default]
aws_access_key_id = xxxxxx
aws_secret_access_key = xxx
[dev]
aws_secret_access_key = xxx
aws_access_key_id = xxx
[eks_admin_dev]
aws_access_key_id = xx
aws_secret_access_key = xx
cat ~/.kube/kubeconfig
I tried specifying the role here (see the sketch after this file); same error.
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - dev-0
      command: aws-iam-authenticator
      env:
      - name: AWS_PROFILE
        value: eks_admin_dev
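For reference, specifying the role directly goes through aws-iam-authenticator's -r/--role flag; run by hand, that would look roughly like this (a sketch using the dev-0 cluster ID and the eks-admin role ARN from ~/.aws/config; it should print a token if the assume-role chain works):
$ AWS_PROFILE=dev aws-iam-authenticator token -i dev-0 -r arn:aws:iam::xxxxxxxx:role/eks-admin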
This works for me using both the AWS_PROFILE env var on the command line and also setting the env in the ~/.kube/config file.
The only thing I can think of that may be happening is that you already have the AWS credentials for your prod account defined in your shell environment (those take precedence over what's in ~/.aws/credentials). You can check with this:
$ env | grep AWS
AWS_SECRET_ACCESS_KEY=xxxxxxxx
AWS_ACCESS_KEY_ID=xxxxxxxxx
If that's the case, you can unset them or remove them from whatever init file you may be sourcing in your shell.
$ unset AWS_SECRET_ACCESS_KEY
$ unset AWS_ACCESS_KEY_ID
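After unsetting them (assuming nothing else in your shell re-exports them), re-running the check and the original command should confirm that the eks_admin_dev profile is actually being picked up:
$ env | grep AWS
$ AWS_PROFILE=eks_admin_dev kubectl get pods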