Kubernetes Load balancing and Proxy

5/8/2018

I am quite new with Kubernetes and I have a few questions regarding REST API request proxy and load balancing.

I have one Master and two Worker nodes, with some of the Services' pods on one Worker node and a few on the other.

In the beginning I had just one Worker node, and I accessed my pods using the Worker node IP and the Service NodePort. After adding another Worker node to the cluster, Kubernetes "redistributed" my pods across both Worker nodes.

Now I can again access my pods using either Worker node IP together with the Service NodePort. This is a bit confusing to me: how can I reach the REST APIs of pods that are not running on the Worker node whose IP address I use?

Also, since I have two Worker nodes now, how should load balancing be done properly across both Worker nodes? I know that I can set the Service type to LoadBalancer, but is that enough?

Thank you for your answers!

-- branko terzic
kubernetes

1 Answer

5/8/2018

how can I reach the REST APIs of pods that are not running on the Worker node whose IP address I use?

  • It is better to think in terms of exposing your Services to the outside world, rather than individual pods, and consequently to stop thinking about the IP addresses of the nodes the pods happen to run on. The answer to this question depends on your setup. Many configurations are possible depending on the actual complexity and speed/availability requirements, but the basic setup boils down to:
    • If you are running in a supported cloud environment, then setting up a load-balanced ingress will expose your Service to the outside world without much fuss.
    • If, however, you are running on bare metal, then you have to provide your own ingress (a simple nginx or Apache proxy pod would suffice) and point its upstream to your Service name (or FQDN, in the case of another namespace). This exposes all pods behind the Service to the outside world, regardless of which nodes they actually run on, and leaves load balancing to the Kubernetes Service.
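To make the bare-metal option concrete, here is a minimal nginx reverse-proxy sketch. The Service name `my-rest-api`, the namespace `default`, and port `8080` are assumptions for illustration; substitute your own values:

```nginx
server {
    listen 80;

    location / {
        # Cluster DNS resolves the Service FQDN to its ClusterIP.
        # The Service then balances across all ready pods behind it,
        # regardless of which node each pod runs on.
        proxy_pass http://my-rest-api.default.svc.cluster.local:8080;
    }
}
```

Run this config in an nginx pod exposed via a NodePort (or hostPort), and external clients never need to know where the backend pods live.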

how should load balancing be done properly across both Worker nodes?

  • This is a slightly more complex topic. With a uniform distribution of your pods across the nodes, you could make do with an external load balancer that is oblivious to pod placement. For us, leaving load balancing to the Kubernetes Service proved more accurate: more often than not, two pods end up on the same node (whenever the number of pods exceeds the number of nodes), in which case an external load balancer spreading traffic across nodes cannot balance uniformly across pods, while the Kubernetes Service layer can.
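As a sketch of the LoadBalancer approach mentioned in the question, a Service manifest might look like the following (the name, labels, and ports are assumptions; in a supported cloud this provisions an external load balancer, and the Service layer then distributes traffic across all matching pods on every node):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-rest-api        # hypothetical name
spec:
  type: LoadBalancer       # cloud provider provisions the external LB
  selector:
    app: my-rest-api       # must match your pod labels
  ports:
    - port: 80             # port the external LB listens on
      targetPort: 8080     # container port of your REST API pods
```

On bare metal, `type: LoadBalancer` alone is not enough, since there is no cloud controller to provision the balancer; you would pair it with something that fills that role, or fall back to the ingress-proxy approach above.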
-- Const
Source: StackOverflow