Why do we need a load balancer to expose kubernetes services using ingress?

4/28/2020

For a sample microservice-based architecture deployed on Google Kubernetes Engine, I need help validating my understanding:

  1. We know Services are supposed to load balance traffic across a Pod ReplicaSet.
  2. When we create an nginx ingress controller and Ingress definitions to route to each Service, a load balancer is also set up automatically.
  3. I had read somewhere that creating an nginx ingress controller means an nginx controller (Deployment) and a LoadBalancer-type Service get created behind the scenes. I am not sure if this is true.

It seems load balancing is being done by Services, and URL-based routing is being done by the ingress controller.

Why do we need a load balancer then? It is not meant to load balance across multiple instances; it will just forward all the traffic to the nginx reverse proxy, which will route requests based on URL.

Please correct if I am wrong in my understanding.

-- inaitgaJ
google-cloud-platform
google-kubernetes-engine
kubernetes
kubernetes-ingress
load-balancing

3 Answers

4/29/2020

It seems load balancing is being done by Services, and URL-based routing is being done by the ingress controller.

Services do balance traffic between Pods, but by default (the ClusterIP type) they aren't accessible from outside the cluster in Google Kubernetes Engine. You can create Services of type LoadBalancer, but each such Service gets its own IP address (a Network Load Balancer), which can get expensive. Also, if one application is made up of several Services, it is much better to use an Ingress object, which provides a single entry point. When you create an Ingress object on GKE, the built-in Ingress controller creates a Google Cloud HTTP(S) load balancer (third-party controllers such as nginx are instead exposed through a LoadBalancer Service). An Ingress object, in turn, can be associated with one or more Service objects.

Then you can get the assigned load balancer IP from ingress object:

kubectl get ingress ingress-name --output yaml

As a result, your application Pods become accessible from outside the Kubernetes cluster:

LoadBalancerIP/url1 -> service1 -> pods

LoadBalancerIP/url2 -> service2 -> pods
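
The two routes sketched above can be expressed in a single Ingress object. A minimal sketch, assuming two existing ClusterIP Services (the service names, paths, and ports are placeholders, not taken from the original answer):

```yaml
# Hypothetical Ingress routing two URL paths to two backend Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      - path: /url1
        pathType: Prefix
        backend:
          service:
            name: service1   # ClusterIP Service in front of the first set of pods
            port:
              number: 80
      - path: /url2
        pathType: Prefix
        backend:
          service:
            name: service2   # ClusterIP Service in front of the second set of pods
            port:
              number: 80
```

Both paths are then reachable through the single load balancer IP assigned to the Ingress.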

-- AttaBoy
Source: StackOverflow

4/28/2020

An ingress controller's pods (nginx, for example) need to be exposed outside the Kubernetes cluster as the entry point for all north-south traffic coming into the cluster. One way to do that is via a LoadBalancer. You could use NodePort as well, but it's not recommended for production; alternatively, you could deploy the ingress controller directly on the host network of a host with a public IP. Having a load balancer also gives you the ability to spread traffic across multiple replicas of the ingress controller pods.
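
The LoadBalancer piece described above is itself just a Service in front of the controller pods. A minimal sketch, assuming a stock nginx-ingress installation (namespace and labels are placeholders):

```yaml
# Hypothetical LoadBalancer-type Service exposing the ingress controller pods.
# The cloud provider provisions an external load balancer for it, which spreads
# incoming traffic across all controller replicas.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```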

When you use an ingress controller, traffic goes from the load balancer to the ingress controller and then to backend Pod IPs based on the rules defined in the Ingress resource. This bypasses the Kubernetes Service and the layer-4 load balancing that kube-proxy provides for it. Internally, the ingress controller discovers all the Pod IPs from the Service's Endpoints and routes traffic directly to the Pods.

-- Arghya Sadhu
Source: StackOverflow

4/29/2020

A Service of type LoadBalancer and an Ingress are both ways to reach your application externally, although they work in different ways.

Service:

In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector (see below for why you might want a Service without a selector).

There are several types of Services; one of them is the LoadBalancer type, which lets you expose your application externally by assigning an external IP to your Service. Each LoadBalancer Service gets its own new external IP. The load balancing across Pods is handled by kube-proxy.
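
As a sketch of the LoadBalancer type (the app name and ports are placeholders): creating a Service like this makes the cloud provider assign it a dedicated external IP:

```yaml
# Hypothetical LoadBalancer Service: the cloud provider assigns it its own
# external IP, and kube-proxy balances connections across the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80          # external port on the load balancer IP
    targetPort: 8080  # container port on the pods
```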

Ingress:

An API object that manages external access to the services in a cluster, typically HTTP. Ingress may provide load balancing, SSL termination and name-based virtual hosting.

When you set up an ingress controller (e.g. nginx-ingress), a Service of type LoadBalancer is created for the ingress-controller pods, a load balancer is automatically created at your cloud provider, and a public IP is assigned to the nginx-ingress Service.

This load balancer/public IP is used for the incoming connections to all your services, and nginx-ingress is responsible for handling those connections.

For example:

Suppose you have 10 Services of LoadBalancer type: this results in 10 new public IPs being created, and you need to use the corresponding IP for each Service you want to reach.

But if you use an Ingress, only one IP is created, and the Ingress routes each incoming connection to the correct Service based on the PATH/URL you defined in the Ingress configuration. With an Ingress you can:

  • Use a regex in the path to choose which Service to route to;
  • Use SSL/TLS;
  • Inject custom headers;
  • Redirect requests to a default service if one of the services fails (default-backend);
  • Create whitelists based on IPs;
  • Etc...
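
Several of the features above are configured through annotations on the Ingress object. A hypothetical sketch for the nginx ingress controller (hostname, CIDR, paths, secret, and service names are all made up for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Only allow clients from this CIDR (IP whitelist).
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24"
    # Enable regex matching in paths and rewrite to the captured group.
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls    # TLS is terminated at the ingress
  rules:
  - host: example.com
    http:
      paths:
      - path: /api/(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-service
            port:
              number: 80
```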

An important note about load balancing with Ingress:

GCE/AWS load balancers do not provide weights for their target pools. This was not an issue with the old LB kube-proxy rules which would correctly balance across all endpoints.

With the new functionality, the external traffic is not equally load balanced across pods, but rather equally balanced at the node level (because GCE/AWS and other external LB implementations do not have the ability for specifying the weight per node, they balance equally across all target nodes, disregarding the number of pods on each node).
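
If the node-level balancing described above (or losing the client source IP) is a concern, Kubernetes provides the standard `externalTrafficPolicy` field on the Service; this is general Service behavior, not something specific to this answer. A minimal sketch:

```yaml
# With externalTrafficPolicy: Local, the external load balancer only sends
# traffic to nodes that have a ready pod of this Service (via health checks),
# the extra node-to-node hop is avoided, and the client source IP is preserved.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - port: 80
    targetPort: 80
```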

-- KoopaKiller
Source: StackOverflow