I am running Jenkins on a separate instance and EKS for the Kubernetes cluster. Currently I deploy changes to the cluster from my local machine with kubectl, using my kubeconfig and aws-iam-authenticator. Now I want to deploy changes from Jenkins instead, so I have installed the Kubernetes and Kubernetes CLI plugins. Below is my kubeconfig file content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: My-Cluster-Api-Server-Endpoint
  name: Cluster-ARN
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: My-Cluster-Api-Server-Endpoint
  name: kubernetes
contexts:
- context:
    cluster: Cluster-ARN
    user: Cluster-ARN
  name: Cluster-ARN
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: Cluster-ARN
kind: Config
preferences: {}
users:
- name: Cluster-ARN
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - cluster-name
      command: aws-iam-authenticator
      env: null
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - cluster-name
      - -r
      - arn:EKS-Service-Role-ARN
      command: aws-iam-authenticator
      env: null
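For context, my current local workflow with this kubeconfig is roughly the following (the cluster name matches the placeholder above, and the manifest path is just an example):

    # confirm aws-iam-authenticator can mint a token for the cluster
    aws-iam-authenticator token -i cluster-name

    # deploy against the current context from the kubeconfig
    kubectl apply -f deployment.yaml

This works fine from my machine; the problem is reproducing it inside a Jenkins pipeline.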
How can I use this kubeconfig file to get access to my K8s cluster from Jenkins? I have added the kubeconfig file as a credential, but when I generate pipeline script code for the Kubernetes CLI plugin, the credentials dropdown does not show the added kubeconfig credential.
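For reference, the pipeline stage I am ultimately trying to generate looks roughly like this sketch (the credential id 'eks-kubeconfig' and the manifest path are placeholders I made up, and this assumes aws-iam-authenticator and kubectl are on the Jenkins agent's PATH):

    pipeline {
        agent any
        stages {
            stage('Deploy to EKS') {
                steps {
                    // withKubeConfig is the step provided by the Kubernetes CLI plugin;
                    // credentialsId should reference the kubeconfig credential added in Jenkins
                    withKubeConfig([credentialsId: 'eks-kubeconfig']) {
                        sh 'kubectl apply -f deployment.yaml'
                    }
                }
            }
        }
    }

Since the dropdown stays empty, I cannot pick a credentialsId for the withKubeConfig step at all.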