Forward all TCP and UDP ports from load balancer to nginx ingress on Azure Kubernetes Service

11/27/2019

I am trying to implement a TCP/UDP gateway using Kubernetes, and I want to dynamically open and close many ports.

Here is the detailed process:

  • We have a running container (containerA) that accepts incoming TCP connections on port 8080
  • We have a load balancer with IP 1.1.1.1; port 9091 is pointed to the NGINX ingress
  • NGINX ingress manages the connection between the load balancer and containerA using the TCP ConfigMap
  • Load balancer 1.1.1.1:9091 -> nginx TCP stream 9091 -> backend containerA port 8080
  • When a new client arrives, we provision a new container (containerB) that also listens on port 8080
  • We add a new port to the load balancer (port 9092)
  • Load balancer 1.1.1.1:9092 -> nginx TCP stream 9092 -> backend containerB port 8080
  • Repeat adding ports for new clients
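The per-client allocation described above can be sketched in a few lines of Python. This is purely illustrative: the names (`BASE_PORT`, `allocate_port`, `tcp_configmap_entry`) are hypothetical helpers, not part of any real API, and the actual provisioning would still have to update the ConfigMap and load balancer.

```python
# Illustrative sketch of the per-client port allocation described above.
# All names here are hypothetical, not part of Kubernetes or nginx.

BASE_PORT = 9091  # first external port exposed on the load balancer


def allocate_port(existing_ports):
    """Return the next free external port, starting at BASE_PORT."""
    port = BASE_PORT
    while port in existing_ports:
        port += 1
    return port


def tcp_configmap_entry(port, service, namespace="default", target_port=8080):
    """Build one key/value pair for the nginx ingress TCP services ConfigMap."""
    return str(port), f"{namespace}/{service}:{target_port}"


# Example: two clients already mapped, a third arrives.
existing = {9091, 9092}
new_port = allocate_port(existing)                     # 9093
key, value = tcp_configmap_entry(new_port, "php-apache3")
# key == "9093", value == "default/php-apache3:8080"
```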

The NGINX ingress ConfigMap for TCP connections looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  # "tcp-services" is the default name used by ingress-nginx;
  # your name/namespace may differ
  name: tcp-services
data:
  "9091": default/php-apache1:8080
  "9092": default/php-apache2:8080
  "9093": default/php-apache3:8080
  "9094": default/php-apache4:8080
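For what it's worth, a new entry can be added to this ConfigMap at runtime with `kubectl patch` instead of editing the file by hand. The ConfigMap name and namespace below (`tcp-services` in `ingress-nginx`) are the ingress-nginx defaults and may differ in your installation; the port and service are examples:

```shell
kubectl patch configmap tcp-services -n ingress-nginx \
  --type merge -p '{"data":{"9095":"default/php-apache5:8080"}}'
```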

Excerpt from Nginx ingress deployment yaml:

        ports:
        - containerPort: 9091
          hostPort: 9091
          name: 9091-tcp
          protocol: TCP
        - containerPort: 9092
          hostPort: 9092
          name: 9092-tcp
          protocol: TCP

I was able to open specific TCP/UDP ports and everything works fine, but right now I have two problems:

  • Adding all the ports one by one in the YAML file is inefficient and hard to manage
  • Adding a new port (e.g. TCP/9091) by modifying the deployment YAML causes the existing pods to restart. This behavior is undesirable when new ports are added frequently

Based on my observation, when adding a new port to the nginx TCP ConfigMap, the changes are reloaded successfully and the ports are opened without a restart. The problem is that traffic is not routed properly until you also add the port to the deployment YAML, which in turn causes the pod to restart.

My questions are:

  1. Is it possible to add only the routing rules, so that the nginx pod doesn't have to restart?

  2. Is it possible to route all ports coming from the load balancer directly to the NGINX ingress on Azure Kubernetes Service?

  3. Do you have other suggestions for my use case?

-- Aries
azure
kubernetes
nginx

1 Answer

11/27/2019

Unless I'm reading this wrong, the question (essentially) is: is it possible to edit a deployment without restarting its pods?

The answer is no. If you need to edit the deployment, it will restart the pods.

But I don't see where the problem is: they are not all restarted at the same time, so there should be no performance degradation.
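To illustrate the point about restarts being staggered: a Deployment with a rolling-update strategy like the following (values are illustrative, not taken from the question) replaces pods one at a time and never takes an old pod down before its replacement is ready:

```
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never remove an old pod early
      maxSurge: 1         # bring one new pod up first
```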

-- 4c74356b41
Source: StackOverflow