How to make service TCP/UDP ports externally accessible in Kubernetes?

6/4/2018

I have many tenants running on one Kubernetes cluster (on AWS), where every tenant has one Pod that exposes one TCP port (not HTTP) and one UDP port.

  • I don't need load balancing capabilities.
  • The approach should expose an externally reachable IP address with a dedicated port for each tenant
  • I don't want to expose the nodes directly to the internet

I have the following service so far:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
    - port: 8111
      targetPort: 8111
      protocol: UDP
      name: my-udp
    - port: 8222
      targetPort: 8222
      protocol: TCP
      name: my-tcp
  selector:
    app: my-app

What is the way to go?

-- mitchkman
kubernetes

2 Answers

4/11/2019
  • Deploy an NGINX ingress controller on your AWS cluster
  • Change the type of your service my-service from NodePort to ClusterIP
  • Edit the ConfigMap tcp-services in the ingress-nginx namespace, adding:
data:
  "8222": your-namespace/my-service:8222
  • Do the same for the ConfigMap udp-services (complete manifests for both are sketched after this list):
data:
  "8111": your-namespace/my-service:8111

Now you can access your application externally via the nginx-controller IP: <ip>:8222 (TCP) and <ip>:8111 (UDP).

-- Nicolas Pepinster
Source: StackOverflow

6/5/2018

The description provided by @ffledgling is what you need.

But I have to mention that if you want to expose ports externally, you have to either use a load balancer or expose the nodes to the Internet. For example, you can expose a node to the Internet and allow access only to the necessary ports, as sketched below.
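
As an illustration of that second option, the question's Service could stay of type NodePort but pin explicit nodePort values, so that only those two ports need to be opened to the Internet in the node's AWS security group. The nodePort numbers 30111 and 30222 below are hypothetical and must fall inside the cluster's NodePort range (30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
    - port: 8111
      targetPort: 8111
      nodePort: 30111   # hypothetical fixed UDP port to allow in the security group
      protocol: UDP
      name: my-udp
    - port: 8222
      targetPort: 8222
      nodePort: 30222   # hypothetical fixed TCP port to allow in the security group
      protocol: TCP
      name: my-tcp
  selector:
    app: my-app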

-- Artem Golenyaev
Source: StackOverflow