I'm following this and am about to ask our IT team to open these hardware firewall ports for me:
Control-plane node(s)
Protocol | Direction | Port Range | Purpose | Used By
---|---|---|---|---
TCP | Inbound | 6443* | Kubernetes API server | All
TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd
TCP | Inbound | 10250 | kubelet API | Self, Control plane
TCP | Inbound | 10251 | kube-scheduler | Self
TCP | Inbound | 10252 | kube-controller-manager | Self
Worker node(s)
Protocol | Direction | Port Range | Purpose | Used By
---|---|---|---|---
TCP | Inbound | 10250 | kubelet API | Self, Control plane
TCP | Inbound | 30000-32767 | NodePort Services† | All
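For context, this is roughly what the same rules would look like applied directly on the hosts with firewalld (a sketch only; a hardware firewall will have its own syntax):

```
# Control-plane node
firewall-cmd --permanent --add-port=6443/tcp         # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp    # etcd server client API
firewall-cmd --permanent --add-port=10250/tcp        # kubelet API
firewall-cmd --permanent --add-port=10251/tcp        # kube-scheduler
firewall-cmd --permanent --add-port=10252/tcp        # kube-controller-manager

# Worker node
firewall-cmd --permanent --add-port=10250/tcp        # kubelet API
firewall-cmd --permanent --add-port=30000-32767/tcp  # NodePort Services
firewall-cmd --reload
```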
Before I ask IT to open these ports, I checked my local environment, which doesn't have a hardware firewall, and I see this:
```
# netstat -oanltp | grep 10250
tcp6    0    0 :::10250    :::*    LISTEN    3914/kubelet    off (0.00/0/0)
# netstat -oanltp | grep 10251
# netstat -oanltp | grep 10252
```
You can see that nothing is listening on `10251` and `10252`. But my `kube-scheduler` and `kube-controller-manager` are running, and everything looks OK:
```
kube-system   kube-controller-manager-shlava   1/1   Running   0   47h   10.192.244.109
kube-system   kube-scheduler-shlava            1/1   Running   0   47h   10.192.244.109
```
So I wonder: is it normal that nothing is listening on `10251` and `10252`?
The answer is: it depends.
It depends on how your `kube-scheduler` and `kube-controller-manager` are configured:

- a custom `--port` flag may have been passed to them,
- the insecure port may have been disabled entirely with `--port 0`,
- or you may be running a newer Kubernetes version in which these ports changed.

The last one is most probable, as Creating a cluster with kubeadm states it is written for version 1.21.
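On a kubeadm cluster you can check the flags directly in the static pod manifests; a quick sketch, assuming kubeadm's default manifest location:

```
# Look for the insecure-port flag in the control-plane manifests
grep -- '--port' /etc/kubernetes/manifests/kube-scheduler.yaml \
                 /etc/kubernetes/manifests/kube-controller-manager.yaml
```

On clusters created with recent kubeadm versions you will likely find `--port=0` in both, meaning the insecure ports 10251 and 10252 are disabled on purpose.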
Ports `10251` and `10252` have been replaced in version 1.17 (see more here):
> Kubeadm: enable the usage of the secure kube-scheduler and kube-controller-manager ports for health checks. For kube-scheduler was 10251, becomes 10259. For kube-controller-manager was 10252, becomes 10257.
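So instead of `10251` and `10252`, it is worth checking the new secure ports; the same kind of check as above (assuming both components bind on all interfaces, as in a default kubeadm setup):

```
# netstat -oanltp | grep -E '10257|10259'
```

If this is the explanation, kube-controller-manager should show up on 10257 and kube-scheduler on 10259.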
Moreover, this functionality was deprecated in 1.19 (more here):
> Kube-apiserver: the componentstatus API is deprecated. This API provided status of etcd, kube-scheduler, and kube-controller-manager components, but only worked when those components were local to the API server, and when kube-scheduler and kube-controller-manager exposed unsecured health endpoints. Instead of this API, etcd health is included in the kube-apiserver health check and kube-scheduler/kube-controller-manager health checks can be made directly against those components' health endpoints.
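Querying those health endpoints directly looks roughly like this; a sketch assuming the default secure ports and the default authorization settings, which allow anonymous access to `/healthz`:

```
# kube-scheduler health check on the secure port
curl -k https://127.0.0.1:10259/healthz
# kube-controller-manager health check on the secure port
curl -k https://127.0.0.1:10257/healthz
```

Each call should print `ok` when the component is healthy (`-k` skips TLS verification, since these endpoints serve self-signed certificates by default).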
It seems some parts of the documentation are outdated.