I still have a question about Kubernetes NodePort service.
NodePort: Exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You'll be able to contact the NodePort service, from outside the cluster, by requesting `<NodeIP>:<NodePort>`.
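In other words, the docs say the same request should work against any node's IP. A minimal sketch of what that looks like (the port 30080 and the node IP placeholders are made-up values for illustration, not from the cluster described here):

```shell
# Assuming the service was allocated NodePort 30080 (hypothetical value),
# the docs imply either node's IP should answer, regardless of where the
# backing pods actually run:
curl http://<nodeA-ip>:30080/
curl http://<nodeB-ip>:30080/
```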
If I have two Nodes, nodeA and nodeB, and I deploy an app only on nodeA and then create a NodePort service, can I use both nodeA's and nodeB's IPs to access this service?
I ran some tests, and the result is no...
I did two kinds of tests:
Test1:
I deployed a Deployment with two pods, one on NodeA and one on NodeB, then created a NodePort service to access it. I could access the service using both NodeA's and NodeB's IPs. Then I deleted the pod on NodeA and tried again: I could no longer access the service using NodeA's IP, but could still access it using NodeB's IP. Once the pod started up again on NodeA, I could access the service via NodeA's IP as before.
Test2:
I deployed a Deployment with only one pod, on NodeA, then created a NodePort service to access it. I could only access the service using NodeA's IP, not NodeB's.
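For reference, here is roughly how I set up Test2 (the deployment name and image are just examples; the jsonpath query reads back whichever NodePort the cluster allocated):

```shell
# Single-replica deployment (lands on one node, NodeA in my case):
kubectl create deployment hello --image=nginx --replicas=1

# Expose it as a NodePort service:
kubectl expose deployment hello --type=NodePort --port=80

# Look up the port Kubernetes allocated for the NodePort:
kubectl get service hello -o jsonpath='{.spec.ports[0].nodePort}'
```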
So my question is:
For the NodePort type, does it only work when a pod is running on the Node whose IP I use? If I use NodeA's IP, won't the service load-balance the request to the pod on NodeB?
Thanks a lot! :)
If I have two Nodes, nodeA and nodeB, and I deploy an app only on nodeA and then create a NodePort service, can I use both nodeA's and nodeB's IPs to access this service?
I ran some tests, and the result is no...
In that case, it sounds very much like one of three things is going on: you do not have kube-proxy running on all the Nodes, the Nodes are firewalled off from one another in a very restrictive way, or you are not using a software-defined network (such as flannel, calico, etc.).
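The first possibility is easy to rule out. One way to check, assuming your cluster runs kube-proxy as a DaemonSet labeled `k8s-app=kube-proxy` (that is the kubeadm convention; your cluster may label it differently):

```shell
# List the kube-proxy pods with the node each one is scheduled on;
# there should be exactly one Running pod per node:
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide
```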
That NodePort behavior is, to the best of my knowledge, implemented using `iptables` rules applied to all the machines, causing traffic received on port X of machine A to be effectively NAT-ed to one of the machine(s) where the actual Pods are running, then back to machine A, and back to the requester. It is kube-proxy's job to install the initial `iptables` rules for doing that, and then to keep them up to date as Pods come and go in the cluster. One can observe the correct behavior by running `iptables -L -n -t nat` on a Node that is running kube-proxy, and looking for the rules named after the various Kubernetes services. They even helpfully include comments in the `iptables` rules, which I thought was nice.
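A sketch of that inspection, assuming kube-proxy is in its default iptables mode (the `KUBE-NODEPORTS` chain name is what iptables-mode kube-proxy installs; other proxy modes will look different):

```shell
# List the NAT rules kube-proxy installed; service names show up
# in the rule comments:
sudo iptables -L -n -t nat | grep -i kube

# The KUBE-NODEPORTS chain holds the per-service NodePort entries:
sudo iptables -t nat -L KUBE-NODEPORTS -n
```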
The firewalling case, I think, speaks for itself.
I have actually never run Kubernetes without a software-defined network, so I am not in a good position to offer troubleshooting steps (aside from: install flannel or calico and rejoice in their awesomeness). Perhaps others will be able to weigh in, if that is in fact your situation.