Kubernetes: looking for an L4 load balancing solution for a non-cloud cluster

2/21/2017

I built a Kubernetes cluster in our local data center. It has 1 master and 4 worker nodes. I am looking for an L4 load balancing solution for an internal service.

root@niubi01:/home/mesos# kubectl get nodes
NAME      STATUS         AGE
niubi01   Ready,master   7d
niubi02   Ready          7d
niubi03   Ready          7d
niubi04   Ready          7d
niubi05   Ready          7d

Assume we have three Pods running a 'hello world' web service. A Service of type 'NodePort' is created to expose them; its external IP shows as '<nodes>' and the node port is 30145.

root@niubi01:/home/mesos# kubectl get service
NAME               CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
example-service    10.100.220.21    <nodes>       8080:30145/TCP   6d
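
For reference, a Service like this could have been created from a manifest roughly like the following sketch (the selector label run: hello-world is an assumption and must match the labels of the three Pods; the ports are taken from the output above):

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    run: hello-world   # assumption: must match the hello-world Pods' labels
  ports:
  - port: 8080         # cluster-internal port (the 8080 in the output above)
    targetPort: 8080   # port the container listens on (assumed to also be 8080)
    nodePort: 30145    # the node port from the output above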

As the documentation mentions, we can reach this 'hello world' service through any node's IP, like:

curl http://niubi01:30145
curl http://niubi02:30145
curl http://niubi03:30145
curl http://niubi04:30145
curl http://niubi05:30145

from outside. The problem is that we can't guarantee any single node stays up forever, not even the master. Which URL should we use? How can we add a load balancer like HAProxy to provide high availability for this service? Should we run another server that balances between these 5 addresses? Is there a better solution for this case?

-- Mian
kubernetes
load-balancing

2 Answers

2/21/2017

Independent of where your load balancer is located, you could simply have a virtual IP address that is balanced between your nodes and include it in your Service definition, as shown in the documentation:

---
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
  externalIPs:
  - 80.11.12.10

Once the traffic for this IP (80.11.12.10) hits any of the nodes, kube-proxy will redirect it to your service.
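
Assuming 80.11.12.10 is actually routed to (at least) one of your nodes, a quick check from outside could look like this:

# hypothetical check: requires 80.11.12.10 to be routed to a node
curl http://80.11.12.10/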

One option for implementing this would be to run Pacemaker on the nodes, as described in many blog posts. But a dedicated load balancer in front of the cluster would work just as well.

The benefit of using virtual IPs is that you don't have to mess with the NodePorts in firewalls or related configuration. Another benefit is that this isn't limited to HTTP traffic.

The downside is that the configuration of the external load balancer and the IP assignment for the service are not automated and have to be done manually. To mitigate this, you could either implement your own provider (see the existing provider implementations on GitHub) or read the service configuration from etcd and use it as the source for your external load balancer's configuration.
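
As a sketch of the second approach (here going through the Kubernetes API with kubectl instead of reading etcd directly), you could periodically extract all NodePorts and feed them into your load balancer's configuration:

# list every NodePort service with its node ports (API-based sketch)
kubectl get services --all-namespaces \
  -o jsonpath='{range .items[?(@.spec.type=="NodePort")]}{.metadata.name}{" "}{.spec.ports[*].nodePort}{"\n"}{end}'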

-- pagid
Source: StackOverflow

2/21/2017

As you have already noticed, you would have to set up a custom load balancer to make this work. This load balancer must be external to your cluster and configured by you.

I would suggest reading up on the concepts of Ingress and ingress controllers. The nginx-ingress-controller in particular is very useful here.

The advantage is that you only have to set up your custom external load balancer once, not for every service you'd like to expose. Your load balancer then directs traffic to the ingress controller, which does the internal routing based on the provided Ingress resources.

To deploy the ingress controller, it should be enough to do the following:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/deployment/nginx/default-backend.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/deployment/nginx/nginx-ingress-controller.yaml

The first line creates a default backend, which is used for all requests that do not match any Ingress rule. It basically just returns 404.

The second line creates a Deployment with 1 replica by default. In a production environment, you may want to change the replica count, either by scaling the Deployment or by using a locally modified copy of the nginx-ingress-controller.yaml file. Also, if you expect a lot of traffic, I'd advise using dedicated nodes for the ingress controller (via DaemonSet + node affinity + taints + tolerations).
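
Scaling the controller Deployment could, for instance, look like this (the Deployment name and the kube-system namespace are assumptions based on the example manifests):

kubectl --namespace kube-system scale deployment nginx-ingress-controller --replicas=3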

The ingress controller now runs without being exposed. I assume exposing the controller is not part of the examples because it varies too much depending on the infrastructure in use. In your case, you should create a Kubernetes Service that exposes the ingress controller as a NodePort by deploying this resource:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-svc
  labels:
    name: nginx-ingress-controller-svc
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      name: http
    - port: 443
      nodePort: 30443
      name: https
  selector:
    k8s-app: nginx-ingress-controller

Please note that the nodePort values are explicitly specified here. This makes life easier when you configure your external load balancer.
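
With these fixed node ports, an external HAProxy (which you already mention in the question) could balance across all five nodes; a minimal haproxy.cfg sketch, using the hostnames from your question, might look like this:

defaults
    mode tcp                  # plain L4 forwarding, as asked for
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http-in
    bind *:80
    default_backend k8s-ingress-http

backend k8s-ingress-http
    balance roundrobin
    # health checks take failed nodes out of rotation automatically
    server niubi01 niubi01:30080 check
    server niubi02 niubi02:30080 check
    server niubi03 niubi03:30080 check
    server niubi04 niubi04:30080 check
    server niubi05 niubi05:30080 check

An analogous frontend/backend pair on port 443 pointing at node port 30443 would cover HTTPS.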

After all of this is set up, you can create Ingress resources to direct external traffic into your internal services. For example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.company.org
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 8080

If your data center's DNS is set up to resolve example.company.org to the external load balancer, calling that hostname will bring you directly to example-service.
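
Until the DNS entry is in place, you could test the whole path by sending the Host header manually to any node's ingress NodePort (node name and port taken from the examples above):

# hypothetical test before DNS exists
curl -H 'Host: example.company.org' http://niubi01:30080/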

All this probably sounds more complicated than just using a NodePort and changing the external load balancer's configuration for each new service. But once it is set up, configuration and automation are simplified a lot. It also gives you a ton of features that would otherwise have to be implemented manually. For example, the nginx-ingress-controller natively supports basic auth by simply adding an annotation to the Ingress resource (see the sketch below). It also supports Let's Encrypt when used in combination with kube-lego. As said in the beginning, you should read the documentation regarding Ingress to figure out what it gives you for free.
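
As a sketch of the basic auth feature just mentioned (annotation names as used by the nginx-ingress-controller examples of that time; a Secret named basic-auth containing an htpasswd-style auth file is assumed to exist):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # assumption: credentials live in a Secret named 'basic-auth'
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: basic-auth
    ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: example.company.org
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 8080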

-- Alexander Block
Source: StackOverflow