I've been looking at setting up an Ingress controller for a bare-metal Kubernetes cluster, but Ingress controllers seem to only work well for HTTP services reachable via port 80 or 443. If you need to expose a TCP or UDP service on an arbitrary port, it seems possible with the Nginx or HAProxy Ingress controllers, but then your whole cluster ends up sharing a single port range. Please let me know if I've misunderstood this.
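If I understand the Nginx controller right, this is roughly what that looks like: ingress-nginx reads a cluster-wide ConfigMap (passed via its `--tcp-services-configmap` flag) mapping external ports to Services, so every namespace draws from the same pool of ports. A minimal sketch, with hypothetical namespace and service names:

```yaml
# Cluster-wide ConfigMap read by ingress-nginx via its
# --tcp-services-configmap flag. Keys are the externally exposed ports,
# values are "<namespace>/<service>:<port>". Because there is one map
# for the whole controller, each external port can only be claimed once.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "8000": "namespace-1/bobs-service:8000"   # Bob gets port 8000
  "8001": "namespace-2/lindas-service:8000" # Linda has to settle for 8001 externally
```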
If you need to expose and load balance TCP or UDP services on arbitrary ports, how would you do it? I was thinking of using ClusterIP so that services get their own VIP and can use any ports they want, but the question then becomes: how do you route traffic to those VIPs and give them friendly DNS names? Is there a solution for this already, or do you have to build one yourself? Using NodePort or any solution that means namespaces have to share a single port range isn't really scalable or desirable, especially if Bob in namespace 1 absolutely needs his service to be reachable on port 8000 but Linda in namespace 2 is already using that port.
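To illustrate what I mean by ClusterIP (names are made up): each Service gets its own VIP, so port 8000 stops being contended, and kube-dns already gives it a name like `bobs-service.namespace-1.svc.cluster.local`. The catch is that both the VIP and that DNS name only work from inside the cluster, which is exactly the routing problem I'm asking about.

```yaml
# A plain ClusterIP Service: it gets its own virtual IP, so Bob can
# bind port 8000 without colliding with Linda's Service in another
# namespace. Inside the cluster it is reachable as
# bobs-service.namespace-1.svc.cluster.local, but the VIP is not
# routable from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: bobs-service
  namespace: namespace-1
spec:
  type: ClusterIP
  selector:
    app: bobs-app      # hypothetical pod label
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000
```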
Any clarification, potential solutions, or help in general will be much appreciated.
The GitHub issue on this is an interesting read, and there are some clever workarounds like starting with HTTPS and then using ALPN to switch to a custom protocol: https://github.com/kubernetes/kubernetes/issues/23291. But of course then your clients need to know how to do that.
But if the TCP and UDP services contending for the same port speak different protocols and have no way to interoperate, then the Ingress controller needs to be able to allocate the equivalent of a distinct routable IP address per exposed service, whether through the cloud provider or through whatever proprietary infrastructure handles that.
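That allocation is essentially what a Service of type LoadBalancer asks for. A sketch with made-up names: on a cloud provider the controller provisions a distinct external IP per Service, while on bare metal something has to watch for these Services and fulfil them.

```yaml
# type: LoadBalancer requests a distinct routable IP per Service.
# Cloud providers fulfil this automatically; on bare metal the external
# IP just stays "pending" unless some piece of infrastructure
# automation allocates an address and programs the routing for it.
apiVersion: v1
kind: Service
metadata:
  name: bobs-service
  namespace: namespace-1
spec:
  type: LoadBalancer
  selector:
    app: bobs-app      # hypothetical pod label
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000
```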
I have not looked closely, but my sense is that the packaged Ingress controllers from NGINX and HAProxy don't have that kind of automation built in. It would have to be built in coordination with whatever infrastructure automation is available.