How to load balance sockets using ingress nginx

10/10/2019

In Kubernetes I have a deployment of 3 pods in charge of the sockets.

I wish to load balance the traffic between the pods of the deployment. To do this, I'm using the NGINX Ingress controller, installed via Helm with the stable/nginx-ingress chart.
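
For reference, the install command looked roughly like this (Helm 2 syntax; the release name and namespace are illustrative, not the exact values I used):

helm install stable/nginx-ingress --name nginx-ingress --namespace ingress-nginx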

The problem is that the clients always connect to the same pod. There is no balancing.

To test the load balancing, I'm using several phones (2-6) on mobile data, each of them opening a socket connection.

I have 2 ingress rules. For the sockets I'm using:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-socket-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/websocket-services: "node-socket-service"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"    
    nginx.ingress.kubernetes.io/upstream-hash-by: "$host"
spec:
  tls:
    - hosts:
      - example.com
  rules:
    - host: example.com
      http:
        paths:     
          - path: /socket.io/
            backend:
              servicePort: 4000
              serviceName: node-socket-service

Service:

apiVersion: v1
kind: Service
metadata:
  name: node-socket-service
spec:
  type: ClusterIP
  selector:
    component: node-socket
  ports:
    - port: 4000
      targetPort: 4000

I tried changing the value of upstream-hash-by to $binary_remote_addr, $remote_addr, $host, ewma and $request_uri, without success.
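
For example, one of the variants I tried simply swaps the annotation value; only this line changes in the Ingress above:

nginx.ingress.kubernetes.io/upstream-hash-by: "$binary_remote_addr"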

I'm wondering if the way I'm running my test is valid. Maybe the load balancing is working fine but simply needs more clients before it becomes visible.

-- LaurentP22
kubernetes
kubernetes-ingress
load-balancing
nginx
socket.io

1 Answer

10/11/2019

I am assuming you are using the following architecture to reach your pod:

Ingress controller ---> kubernetes service ---> kubernetes deployment (POD)

If this is the case, then you already have load balancing with a round-robin policy, which leads me to conclude that your deployment has only one replica. Check the number of replicas by running kubectl describe deployment $YOUR_DEPLOYMENT. Increase the number of replicas by running kubectl scale deployment $YOUR_DEPLOYMENT --replicas=5.
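
Since the ingress controller balances across the Service's endpoints, you can also confirm how many endpoints are actually behind it (service name and label taken from your question):

kubectl get endpoints node-socket-service
kubectl get pods -l component=node-socket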

In case you are using a different architecture, I would need to see it in order to verify why load balancing is not working. Most likely you are not using a Deployment but a bare Pod to deploy your container.
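
For completeness, a minimal Deployment matching the Service selector from your question would look roughly like this (the replica count and image are illustrative placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-socket
spec:
  replicas: 3
  selector:
    matchLabels:
      component: node-socket
  template:
    metadata:
      labels:
        component: node-socket
    spec:
      containers:
        - name: node-socket
          image: your-registry/node-socket:latest
          ports:
            - containerPort: 4000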

-- Rodrigo Loza
Source: StackOverflow