Cannot connect to EKS cluster with kubectl - "user:" and "name:" are the same as the cluster name in config

1/11/2022

I cannot perform any kubectl commands. Every command fails saying it cannot connect to the EKS cluster.

Looking at my ~/.kube/config file, I notice that "user:" and "name:" are the same as the cluster name (its ARN). I assume that is the problem.

Note that this runs in a Fargate container. I did not set it up; I am just trying to figure out why it cannot connect, and I am fairly new to Kubernetes.

What initially populates those fields? The code does perform an AWS STS assume-role in the container before it tries to use kubectl. Can I get all of these fields from the STS assume-role call?
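
For reference, here is roughly how I understand those fields get populated (the cluster name np and region us-west-1 are taken from the config below, and I believe this command names the cluster, context, and user entries after the cluster ARN by default, which would explain why they all match):

# Writes/updates the cluster, context, and user entries in ~/.kube/config,
# naming all three after the cluster ARN
aws eks update-kubeconfig --region us-west-1 --name np

If that is accurate, the identical names may be normal rather than the cause, but please correct me if I have that wrong.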

Any help is greatly appreciated!!

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxxooooxxxxooooxxxxooooxx
    server: https://xxxxooooxxxxooo.sk1.us-west-1.eks.amazonaws.com
  name: arn:aws:eks:us-west-1:1212121:cluster/np
contexts:
- context:
    cluster: arn:aws:eks:us-west-1:1212121:cluster/np
    user: arn:aws:eks:us-west-1:121212156:cluster/np
  name: arn:aws:eks:us-west-1:1212121:cluster/np
current-context: arn:aws:eks:us-west-1:1212121:cluster/np
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-west-1:1212121:cluster/np
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-west-1
      - eks
      - get-token
      - --cluster-name
      - np
      command: aws
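
For completeness, the assume-role step before kubectl looks roughly like the sketch below; the role ARN and session name are placeholders I made up, not the real values from the container:

# Assume the role and export its temporary credentials so the
# "aws eks get-token" exec plugin in the kubeconfig can use them
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::1212121:role/eks-access-role \
  --role-session-name kubectl-session \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)      # text output is tab-separated
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)

kubectl get nodes    # this is the point where it cannot connect
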
-- ErnieAndBert
amazon-eks
kubectl
kubernetes

0 Answers