I'm trying to replicate the kubectl get pods command in Python 3 using the Kubernetes Python client library. Except I'm working with a remote Kubernetes cluster, not my localhost; the configuration host is a particular web address.
Here's what I tried:
v1 = kubernetes.client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
As recommended in the documentation. This, however, defaults to querying my localhost instead of the remote address. I know I have access to that address, because the following runs exactly as expected:
import time
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint
configuration = kubernetes.client.Configuration()
# Configure API key authorization: BearerToken
configuration.api_key['authorization'] = 'YOUR_API_KEY'
# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
configuration.api_key_prefix['authorization'] = 'Bearer'
# Defining the host is optional and defaults to http://localhost
configuration.host = "THE WEB HOST I'M USING"
# Enter a context with an instance of the API kubernetes.client
with kubernetes.client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = kubernetes.client.AdmissionregistrationApi(api_client)
    try:
        api_response = api_instance.get_api_group()
        pprint(api_response)
    except ApiException as e:
        print("Exception when calling AdmissionregistrationApi->get_api_group: %s\n" % e)
What do you all think? How do I force it to check the pods on that host, getting around the localhost default?
I know two solutions that may help in your case. I will describe both of them and you may choose which one suits you best.
I recommend setting up a kubeconfig file, which allows you to connect to a remote cluster.
You can find more information on how to configure it in the documentation: Organizing Cluster Access Using kubeconfig Files
If you have a kubeconfig file configured, you can use the load_kube_config() function to load authentication and cluster information from it.
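For reference, a minimal kubeconfig file has roughly this shape. All names, the server address, the certificate path, and the token below are placeholders, not values from any real cluster:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: remote-cluster                              # hypothetical cluster name
  cluster:
    server: https://<ENDPOINT_OF_MY_K8S_CLUSTER>    # your API server address
    certificate-authority: /path/to/ca.crt          # or certificate-authority-data: <base64>
users:
- name: remote-user                                 # hypothetical user name
  user:
    token: <MY_TOKEN>                               # or client-certificate / client-key
contexts:
- name: remote-context
  context:
    cluster: remote-cluster
    user: remote-user
current-context: remote-context
```

load_kube_config() reads the current-context entry to decide which cluster and user to use.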
I've created a simple list_pods_1.py script to illustrate how it may work:
$ cat list_pods_1.py
#!/usr/bin/python3.7
# Script name: list_pods_1.py
import kubernetes.client
from kubernetes import client, config
config.load_kube_config("/root/config") # I'm using file named "config" in the "/root" directory
v1 = kubernetes.client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
$ ./list_pods_1.py
Listing pods with their IPs:
10.32.0.2 kube-system coredns-74ff55c5b-5k28b
10.32.0.3 kube-system coredns-74ff55c5b-pfppk
10.156.15.210 kube-system etcd-kmaster
10.156.15.210 kube-system kube-apiserver-kmaster
10.156.15.210 kube-system kube-controller-manager-kmaster
10.156.15.210 kube-system kube-proxy-gvxhq
10.156.15.211 kube-system kube-proxy-tjxch
10.156.15.210 kube-system kube-scheduler-kmaster
10.156.15.210 kube-system weave-net-6xqlq
10.156.15.211 kube-system weave-net-vjm7j
The second solution is based on the remote_cluster.py example, which shows that it is possible to communicate with a remote Kubernetes cluster from a server outside of the cluster, without the kube client installed on it. The communication is secured with the use of a Bearer token.
You can see how to create and use the token in the Accessing Clusters documentation.
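As a sketch of the token part (the first command assumes kubectl v1.24 or newer; on older clusters the token instead lives in an auto-created ServiceAccount secret):

```shell
# kubectl v1.24+: request a short-lived token for the "default"
# ServiceAccount in the "default" namespace
kubectl create token default

# On older clusters, read the token from the ServiceAccount's secret instead:
# kubectl get secret $(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}') \
#   -o jsonpath='{.data.token}' | base64 --decode
```

Both commands require access to a live cluster, so treat them as a starting point rather than a copy-paste recipe.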
I've created a simple list_pods_2.py script (based on the remote_cluster.py script) to illustrate how it may work:
$ cat list_pods_2.py
#!/usr/bin/python3.7
import kubernetes.client
from kubernetes import client, config
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
# Define the bearer token we are going to use to authenticate.
# See here to create the token:
# https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/
aToken = "<MY_TOKEN>"
# Create a configuration object
aConfiguration = client.Configuration()
# Specify the endpoint of your Kube cluster
aConfiguration.host = "https://<ENDPOINT_OF_MY_K8S_CLUSTER>"
# Security part.
# In this simple example we are not going to verify the SSL certificate of
# the remote cluster (for simplicity)
aConfiguration.verify_ssl = False
# Nevertheless, if you want to verify it, you can do so with these two parameters
# (ssl_ca_cert is the path to the file that contains the CA certificate):
# aConfiguration.verify_ssl = True
# aConfiguration.ssl_ca_cert = "certificate"
aConfiguration.api_key = {"authorization": "Bearer " + aToken}
# Create an ApiClient with our config
aApiClient = client.ApiClient(aConfiguration)
# Do calls
v1 = client.CoreV1Api(aApiClient)
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" %
          (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
$ ./list_pods_2.py
Listing pods with their IPs:
10.32.0.2 kube-system coredns-74ff55c5b-5k28b
10.32.0.3 kube-system coredns-74ff55c5b-pfppk
10.156.15.210 kube-system etcd-kmaster
10.156.15.210 kube-system kube-apiserver-kmaster
10.156.15.210 kube-system kube-controller-manager-kmaster
10.156.15.210 kube-system kube-proxy-gvxhq
10.156.15.211 kube-system kube-proxy-tjxch
10.156.15.210 kube-system kube-scheduler-kmaster
10.156.15.210 kube-system weave-net-6xqlq
10.156.15.211 kube-system weave-net-vjm7j
NOTE: As an example, I am using a token for the default service account (you will probably want to use a different ServiceAccount), but to work properly this ServiceAccount needs appropriate permissions.
For example, you may add the view role to your ServiceAccount like this:
$ kubectl create clusterrolebinding default-sa-view-access --clusterrole=view --serviceaccount=default:default
clusterrolebinding.rbac.authorization.k8s.io/default-sa-view-access created
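You can then check that the binding took effect; as a sketch (this assumes the default:default ServiceAccount used above and requires access to the cluster):

```shell
# Ask the API server whether the default ServiceAccount may now list pods
# cluster-wide; prints "yes" once the view role is bound
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:default:default
```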