From the master node in a Kubernetes cluster, I can run kubectl get nodes and see the status of any individual node on the cluster, since kubectl can find the cluster cert for authentication. On my local workstation, assuming I have auth configured correctly, I can do the same.
From the nodes that are joined to the Kubernetes master, is there any way, short of configuring auth so that kubectl works, to identify whether the node is in a Ready or NotReady state?
I'm trying to build some monitoring tools that reside on the nodes themselves, and I'd like to avoid having to set up service accounts and the like just to check the node status, in case there's some way I can identify it via the kubelet, logs, a file somewhere on the node, a command, etc.
There's no canonical way of doing this; one option is to use the kubelet API.
The kubelet exposes an API which the control plane talks to in order to make it run pods. By default, this runs on port 10250, but this is a write API and needs to be authenticated.
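To illustrate, hitting the authenticated port anonymously should fail on a typical cluster (the exact response depends on the kubelet's authentication settings):

curl -sk https://localhost:10250/healthz
# -k skips TLS verification; without a bearer token this typically
# returns 401 Unauthorized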
However, the kubelet also has a --read-only-port flag, which defaults to port 10255. You can use this to check whether the kubelet is healthy by hitting its healthz endpoint.
curl http://<ip>:10255/healthz
ok
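As a sketch of how a node-local monitor might consume this (a hypothetical script, assuming the read-only port is enabled at its default of 10255):

#!/usr/bin/env bash
# Hypothetical probe against the kubelet read-only port.
# Exits 0 if the kubelet reports healthy, 1 otherwise.
status=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:10255/healthz)
if [ "$status" = "200" ]; then
  echo "kubelet healthy"
else
  echo "kubelet unhealthy (HTTP $status)"  # 000 means no response at all
  exit 1
fi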
A similar healthz endpoint is also served on localhost, on the kubelet's own healthz port (10248 by default):
curl http://localhost:10248/healthz
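Note that on many newer or hardened clusters the read-only port is disabled (set to 0), so a node-local monitor may want to try both ports. A minimal sketch, assuming at least one of the two endpoints is enabled:

# Hypothetical fallback: try the read-only port first, then the
# kubelet's own healthz port, which listens on localhost by default.
for port in 10255 10248; do
  if curl -sf "http://localhost:${port}/healthz" >/dev/null; then
    echo "kubelet healthy (port ${port})"
    exit 0
  fi
done
echo "kubelet not healthy on either port"
exit 1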
If this isn't sufficient, you could also check for a running pod by hitting the pods API:
curl http://<ip>:10255/pods
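The response is a JSON PodList, so, assuming jq is available on the node, you could for example list the pods the kubelet knows about:

curl -s http://<ip>:10255/pods | jq -r '.items[].metadata.name'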