Exposing multiple TCP/UDP services using a single LoadBalancer on K8s

4/25/2020

Trying to figure out how to expose multiple TCP/UDP services using a single LoadBalancer on Kubernetes. Let's say the services are ftpsrv1.com and ftpsrv2.com, each serving on port 21.

Here are the options that I can think of and their limitations:

  • One LB per svc: too expensive.
  • NodePort: I want to use a port outside the 30000-32767 range.
  • K8s Ingress: does not support TCP or UDP services as of now.
  • Using the NGINX Ingress controller: which again would be a one-to-one mapping.
  • Found this custom implementation: but it doesn't seem to be updated; the last update was almost a year ago.

Any input will be greatly appreciated.

-- Ali
kubernetes
kubernetes-ingress

2 Answers

4/25/2020

In regard to "NodePort: I want to use a port outside the 30000-32767 range."

You can manually select the port for each service via the "nodePort" setting in the Service's YAML file, or set the flag indicated below so that ports are allocated automatically from your custom range for all services.

From the docs: "If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767)."
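As a minimal sketch, a Service with a manually chosen nodePort could look like this (the name ftpsrv1 and its selector are placeholders based on the question):

apiVersion: v1
kind: Service
metadata:
  name: ftpsrv1
spec:
  type: NodePort
  selector:
    app: ftpsrv1        # assumes the pods are labeled app=ftpsrv1
  ports:
  - port: 21
    targetPort: 21
    nodePort: 30021     # must fall inside the configured service-node-port-range

Note that --service-node-port-range is a kube-apiserver flag, so on managed platforms such as GKE you typically cannot change it.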

-- Perryn Gordon
Source: StackOverflow

4/27/2020

It's actually possible to do it using NGINX Ingress.

Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY].
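As a concrete illustration of that format, an entry mapping external port 21 to one of the FTP services from the question could look like this (the service name ftpsrv1 and its namespace are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  "21": default/ftpsrv1:21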

The existing documentation describes how this can be achieved using minikube, but doing it on an on-premises Kubernetes cluster is different and requires a few more steps.

There is a lack of documentation describing how it can be done on a non-minikube system, which is why I decided to go through all the steps here. This guide assumes you have a fresh cluster with no NGINX Ingress installed.

I'm using a GKE cluster, and all commands are run from my Linux workstation. It can also be done on a bare-metal K8s cluster.

Create sample application and service

Here we are going to create an application and its services, which we'll expose later using our Ingress.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: default
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis
        imagePullPolicy: Always
        name: redis
        ports:
        - containerPort: 6379
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: default
spec:
  selector:
    app: redis
  type: ClusterIP
  ports:
    - name: tcp-port
      port: 6379
      targetPort: 6379
      protocol: TCP
---      
apiVersion: v1
kind: Service
metadata:
  name: redis-service2
  namespace: default
spec:
  selector:
    app: redis
  type: ClusterIP
  ports:
    - name: tcp-port
      port: 6380
      targetPort: 6379
      protocol: TCP      
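Save the manifest above to a file (I'm calling it redis.yaml here; any name works) and apply it:

$ kubectl apply -f redis.yaml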

Notice that we are creating 2 different services for the same application. This is only a proof of concept; I want to show later that many ports can be mapped using only one Ingress.

Installing NGINX Ingress using Helm:

Install helm 3:

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

Add NGINX Ingress repo:

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

Install NGINX Ingress on kube-system namespace:

$ helm install -n kube-system ingress-nginx ingress-nginx/ingress-nginx
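Before proceeding, you can confirm the controller pod is running (the label below is the one the ingress-nginx chart applies by default):

$ kubectl get pods -n kube-system -l app.kubernetes.io/name=ingress-nginx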

Preparing our new NGINX Ingress Controller Deployment

We have to add the following lines under spec.template.spec.containers.args:

        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services

So we have to edit using the following command:

$ kubectl edit deployments -n kube-system ingress-nginx-controller

And make it look like this:

...
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --publish-service=kube-system/ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=kube-system/ingress-nginx-controller
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
...
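If you prefer something scriptable over kubectl edit, a JSON patch appending the two flags should achieve the same result (a sketch, assuming the controller is the first container in the pod spec):

$ kubectl patch deployment ingress-nginx-controller -n kube-system --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--udp-services-configmap=$(POD_NAMESPACE)/udp-services"}]'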

Create tcp/udp services Config Maps

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: kube-system
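Save both manifests to a file (the name services-configmaps.yaml is my choice) and apply it:

$ kubectl apply -f services-configmaps.yaml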

Since these ConfigMaps are centralized and may contain other configurations, it is best to patch them rather than completely overwrite them every time you add a service:

$ kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6379":"default/redis-service:6379"}}'
$ kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6380":"default/redis-service2:6380"}}'

Where:

  • 6379: the port on which your service should be reachable from outside the cluster
  • default: the namespace in which your service is installed
  • redis-service: the name of the service

We can verify that our resource was patched with the following command:

$ kubectl get configmap tcp-services -n kube-system -o yaml

apiVersion: v1
data:
  "6379": default/redis-service:6379
  "6380": default/redis-service2:6380
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"name":"tcp-services","namespace":"kube-system"}}
  creationTimestamp: "2020-04-27T14:40:41Z"
  name: tcp-services
  namespace: kube-system
  resourceVersion: "7437"
  selfLink: /api/v1/namespaces/kube-system/configmaps/tcp-services
  uid: 11b01605-8895-11ea-b40b-42010a9a0050

The only thing you need to validate is that there are entries under the data property that look like this:

  "6379": default/redis-service:6379
  "6380": default/redis-service2:6380

Add ports to NGINX Ingress Controller Deployment

We need to patch our NGINX Ingress controller so that it listens on ports 6379/6380 and can route traffic to our services.

spec:
  template:
    spec:
      containers:
      - name: controller
        ports:
         - containerPort: 6379
           hostPort: 6379
         - containerPort: 6380
           hostPort: 6380 

Create a file called nginx-ingress-controller-patch.yaml and paste the contents above.

Next apply the changes with the following command:

$ kubectl patch deployment ingress-nginx-controller -n kube-system --patch "$(cat nginx-ingress-controller-patch.yaml)"
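As a quick sanity check, you can confirm that the ports were added to the controller container:

$ kubectl get deployment ingress-nginx-controller -n kube-system -o jsonpath='{.spec.template.spec.containers[0].ports}'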

Add ports to NGINX Ingress Controller Service

Unlike the solution presented for minikube, we also have to patch our NGINX Ingress Controller Service, as it is responsible for exposing these ports.

spec:
  ports:
  - nodePort: 31100
    port: 6379
    name: redis
  - nodePort: 31101
    port: 6380
    name: redis2

Create a file called nginx-ingress-svc-controller-patch.yaml and paste the contents above.

Next apply the changes with the following command:

$ kubectl patch service ingress-nginx-controller -n kube-system --patch "$(cat nginx-ingress-svc-controller-patch.yaml)"

Check our service

$ kubectl get service -n kube-system ingress-nginx-controller
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                                    AGE
ingress-nginx-controller   LoadBalancer   10.15.251.203   34.89.108.48   6379:31100/TCP,6380:31101/TCP,80:30752/TCP,443:30268/TCP   38m

Notice that our ingress-nginx-controller is now listening on ports 6379/6380.

Test that you can reach your service with telnet via the following command:

$ telnet 34.89.108.48 6379

You should see the following output:

Trying 34.89.108.48...
Connected to 34.89.108.48.
Escape character is '^]'.

To exit telnet, press Ctrl and ] at the same time, then type quit and press Enter.

We can also test port 6380:

$ telnet 34.89.108.48 6380
Trying 34.89.108.48...
Connected to 34.89.108.48.
Escape character is '^]'.
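Since the backing service is Redis, you can also run an end-to-end test with redis-cli if you have it installed (replace the IP with your own EXTERNAL-IP):

$ redis-cli -h 34.89.108.48 -p 6379 ping
PONG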

If you were not able to connect, please review the steps above.


-- mWatney
Source: StackOverflow