I want to connect to and call the Kubernetes REST APIs from inside a running pod. The cluster in question is an AWS EKS cluster that uses IAM authentication, and I want to do all of this with the Kubernetes Python library.
From inside my Python file:
from kubernetes import client, config
config.load_incluster_config()
v1 = client.CoreV1Api()
ret = v1.list_pod_for_all_namespaces(watch=False)
The above call throws a 403 error, which I believe is due to the different authentication mechanism that AWS EKS uses. As a workaround, I tried the following:
ApiToken = 'eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.xxx.yyy'
configuration = client.Configuration()
configuration.host = 'https://abc.sk1.us-east-1.eks.amazonaws.com'
configuration.verify_ssl = False
configuration.debug = True
configuration.api_key = {"authorization": "Bearer " + ApiToken}
client.Configuration.set_default(configuration)
While the above works, I have to hardcode a token that I generate locally via kubectl and check it into the code, which is a security risk.
Is there a more proper way to authenticate the Kubernetes Python library with AWS EKS?
You can use the following method to get the token. This assumes that you have successfully installed and configured aws-iam-authenticator on your pod/server/laptop.
import subprocess

from kubernetes import client

def get_token(cluster_name):
    # Shell out to aws-iam-authenticator and return a bearer token for the cluster.
    args = ("/usr/local/bin/aws-iam-authenticator", "token", "-i", cluster_name, "--token-only")
    popen = subprocess.Popen(args, stdout=subprocess.PIPE)
    popen.wait()
    return popen.stdout.read().decode("utf-8").rstrip()

api_token = get_token("<cluster_name>")

configuration = client.Configuration()
configuration.host = '<api_endpoint>'
configuration.verify_ssl = False
configuration.debug = True
configuration.api_key['authorization'] = "Bearer " + api_token
configuration.assert_hostname = True
client.Configuration.set_default(configuration)

v1 = client.CoreV1Api()
ret = v1.list_pod_for_all_namespaces(watch=False)
print(ret)
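If you would rather not depend on the aws-iam-authenticator binary being present in the pod image, the same kind of token can be generated directly with boto3/botocore by presigning an STS GetCallerIdentity request. This is only a minimal sketch, assuming boto3 is installed and the pod has AWS credentials available (instance profile, environment variables, etc.); get_eks_token, cluster_name and region are placeholder names I chose for illustration:

import base64

import boto3
from botocore.signers import RequestSigner

def get_eks_token(cluster_name, region):
    # Presign an sts:GetCallerIdentity request with the x-k8s-aws-id header
    # set to the cluster name, then base64-encode the signed URL as the token.
    session = boto3.session.Session()
    sts = session.client("sts", region_name=region)
    signer = RequestSigner(
        sts.meta.service_model.service_id,
        region,
        "sts",
        "v4",
        session.get_credentials(),
        session.events,
    )
    params = {
        "method": "GET",
        "url": "https://sts.{}.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15".format(region),
        "body": {},
        "headers": {"x-k8s-aws-id": cluster_name},
        "context": {},
    }
    signed_url = signer.generate_presigned_url(
        params, region_name=region, expires_in=60, operation_name=""
    )
    return "k8s-aws-v1." + base64.urlsafe_b64encode(signed_url.encode("utf-8")).decode("utf-8").rstrip("=")

Whichever way you generate it, keep in mind that these tokens are short-lived (on the order of minutes), so a long-running process should regenerate the token rather than cache it for its whole lifetime.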
There is a PR for kubernetes-client/python-base that adds support for exec plugins: "Attempt to implement exec-plugins support in kubeconfig".
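Once that support is available, the kubeconfig itself can tell the client library to run aws-iam-authenticator, so the Python side reduces to a plain load_kube_config() call. A rough sketch of what that could look like, assuming the PR is merged and your kubeconfig contains an exec entry like the one in the comment (cluster and user names are placeholders):

# The kubeconfig's user entry would delegate token generation to the plugin, e.g.:
#
#   users:
#   - name: eks-user
#     user:
#       exec:
#         apiVersion: client.authentication.k8s.io/v1alpha1
#         command: aws-iam-authenticator
#         args: ["token", "-i", "<cluster_name>"]
from kubernetes import client, config

config.load_kube_config()  # would invoke the exec plugin to fetch a fresh token
v1 = client.CoreV1Api()
print(v1.list_pod_for_all_namespaces(watch=False))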