We are using kops to create our own Kubernetes cluster on AWS EC2. We run some special processes directly on the EC2 instances and would like them to have access to the Kubernetes node labels, but I cannot find a way to read those labels from the instance itself.
How can I access the Kubernetes labels of the node that the instance is hosting, from the instance layer (outside of any containers), using standard Unix tools like bash, curl, and sed?
Assuming that you are running the process outside Kubernetes and directly on the host, the first step would be to get the correct node name (which is usually the same as the hostname, but it is best to confirm), as described in this answer:
$ curl -Gs http://localhost:10255/pods/ | grep -o '"nodeName":"[^"]*"' | head -n 1
"nodeName":"e2e-test-stclair-minion-8o3b"
Then, using kubectl or the Kubernetes API (whichever is available), get the labels of that node. I am assuming you have access to a kubeconfig and are using kubectl:
kubectl get node "$NODE_NAME" -o jsonpath='{.metadata.labels}'
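If kubectl is not installed on the node, the same query can be made against the API server with curl. This is only a sketch: the APISERVER endpoint and TOKEN below are placeholders you would replace with your cluster's API address and whatever credentials your kubeconfig actually uses, and that credential needs RBAC permission to get nodes:
# Hypothetical endpoint and token -- substitute your own values.
APISERVER=https://api.your-cluster.example.com
TOKEN=$(cat /path/to/token)        # e.g. a service-account token
curl -sk -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/nodes/$NODE_NAME" \
  | grep -o '"labels":{[^}]*}'     # labels is a flat map, so this grep is enough
(-k skips TLS verification for brevity; pass --cacert with your cluster CA in practice.)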
If you were running a DaemonSet, then you could use the downward API to get the node name and then query the API server for that node's labels; see the sketch after the snippet below. To get the node name as an environment variable inside the pod:
env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
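From inside such a pod, the labels can then be fetched with curl using the pod's service account. This is a sketch that assumes the service account has RBAC permission to get nodes; the token and CA paths are the standard in-cluster locations:
# Standard in-cluster service-account paths and API server DNS name.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
curl -s --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/nodes/$MY_NODE_NAME" \
  | grep -o '"labels":{[^}]*}'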
If there is no kubectl on the nodes and you don't want to run the host process as a DaemonSet itself, then the closest you can get is to run a small Go container as a DaemonSet: it reads the node name from the environment (as above), queries that node's labels from the Kubernetes API server, and exposes them on a NodePort that the host process can access.