kubectl config use-context deleting eks user

9/17/2018

I'm encountering some really weird behaviour while attempting to switch contexts using kubectl.

My config file declares two contexts; one points to an in-house cluster, while the other points to an Amazon EKS cluster.

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <..>
    server: <..>
  name: in-house
- cluster:
    certificate-authority-data: <..>
    server: <..>
  name: eks
contexts:
- context:
    cluster: in-house
    user: divesh-in-house
  name: in-house-context
- context:
    cluster: eks
    user: divesh-eks
  name: eks-context
current-context: in-house-context
preferences: {}
users:
- name: divesh-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "eks"
      env: null
- name: divesh-in-house
  user:
    client-certificate-data: <..>
    client-key-data: <..>

I'm also using the aws-iam-authenticator to authenticate to the EKS cluster.

My problem is this: as long as I work with the in-house cluster, everything works fine. But when I execute kubectl config use-context eks-context, I observe the following behaviour.

  • Any operation I try to perform on the cluster (say, kubectl get pods -n production) shows me a Please enter Username: prompt. I assumed the aws-iam-authenticator would handle authentication for me. I can confirm that running the authenticator manually (aws-iam-authenticator token -i eks) works fine.
  • Executing kubectl config view omits the divesh-eks user, so the output looks like

    users:
    - name: divesh-eks
      user: {}
  • Switching back to the in-house cluster by executing kubectl config use-context in-house-context modifies my config file and deletes the divesh-eks user, so the config file now contains

    users:
    - name: divesh-eks
      user: {}

My colleagues don't seem to face this problem.

Thoughts?

-- divesh premdeep
amazon-eks
kubectl
kubernetes

1 Answer

9/18/2018

The exec portion of that config was added in kubectl 1.10 (https://github.com/kubernetes/kubernetes/pull/59495).

If you use a version of kubectl older than 1.10, it will not recognize the exec plugin (resulting in prompts for credentials), and if you use it to make kubeconfig changes, it will drop the exec field when it persists the changes.
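A quick way to verify this is to compare your client version against a colleague's working setup. A minimal check might look like the following (the comments describe expected behaviour, not exact output):

    # Print the client version; exec credential plugins require v1.10 or newer
    kubectl version --client

    # If the client is older than 1.10: upgrade kubectl, restore the exec stanza
    # for the divesh-eks user in the kubeconfig, then retry the context switch
    kubectl config use-context eks-context
    kubectl get pods -n production   # should no longer prompt for a username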

-- Jordan Liggitt
Source: StackOverflow