How do I expose non-HTTP, TCP services in Kubernetes?

10/11/2019

I'm running a Kubernetes cluster in a public cloud (Azure/AWS/Google Cloud), and I have some non-HTTP services I'd like to expose for users.

For HTTP services, I'd typically use an Ingress resource to expose that service publicly through an addressable DNS entry.

For non-HTTP, TCP-based services (e.g., a database such as PostgreSQL), how should I expose these for public consumption?

I considered using NodePort services, but this requires the nodes themselves to be publicly accessible (relying on kube-proxy to route to the appropriate node). I'd prefer to avoid this if possible.

LoadBalancer services seem like another option, though I don't want to create a dedicated cloud load balancer for each TCP service I want to expose.

I'm aware that the NGINX Ingress controller supports exposing TCP and UDP services, but that seems to require a static definition of the services you'd like to expose. For my use case, these services are being dynamically created and destroyed, so it's not possible to define these service mappings upfront in a static ConfigMap.
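For reference, the kind of static mapping I mean looks roughly like this: the controller is started with --tcp-services-configmap pointing at a ConfigMap whose keys are external ports and whose values are namespace/service:port targets. (The namespace, Service name, and ports below are just placeholders.)

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: ingress-nginx
    data:
      # external port -> "<namespace>/<service>:<port>"
      "5432": "default/postgresql:5432"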

-- cjheppell
cloud
kubernetes
networking
tcp

2 Answers

10/12/2019

For non-HTTP, TCP-based services (e.g., a database such as PostgreSQL), how should I expose these for public consumption?

Well, that depends on how you expect the ultimate user to address those Services. As you pointed out, with an Ingress it is possible to use virtual hosting to route all requests to the same Ingress controller, which then uses the Host: header to dispatch within the cluster.
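For contrast, a minimal sketch of that HTTP virtual hosting (the hostname and backend Service name are hypothetical; older clusters used the extensions/v1beta1 API with a slightly different backend syntax):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: virtual-hosting-demo       # hypothetical
    spec:
      rules:
      - host: app-a.example.com        # dispatched on the HTTP Host: header
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a            # hypothetical backend Service
                port:
                  number: 80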

With a TCP service such as PostgreSQL, there is no such header. So you would have to use either an IP-based mechanism, or assign each Service a dedicated port on your Internet-facing IP.

If your clients are IPv6-aware, assigning each Service a dedicated IP address is entirely reasonable, given the massive address space that IPv6 offers. Otherwise, you have two knobs to turn: the IP and the port.

From there, how you get those connections routed within your cluster to the right Service will depend on how you solved the first problem.
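The dedicated-IP approach maps naturally onto a Service of type LoadBalancer, which provisions one cloud load balancer, and therefore one externally addressable IP, per Service. A minimal sketch with hypothetical names:

    apiVersion: v1
    kind: Service
    metadata:
      name: postgresql            # hypothetical
    spec:
      type: LoadBalancer          # one cloud load balancer (and IP) per Service
      selector:
        app: postgresql           # hypothetical pod label
      ports:
      - port: 5432
        targetPort: 5432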

-- mdaniel
Source: StackOverflow

10/29/2019

Maybe this workflow can help:

(I'm assuming the cloud provider is AWS.)

  • AWS Console: Create a segregated VPC and launch your Kubernetes EC2 instances (or Auto Scaling group) with public IP assignment disabled. This makes the instances unreachable from the Internet; you can still reach them via their private IPs (e.g., 172.30.1.10) over a site-to-site VPN, or through a secondary EC2 instance with a public IP in the same VPC.

  • Kubernetes: Create a Service with a fixed NodePort (e.g., 35432 for Postgres; note that this lies outside the default NodePort range of 30000-32767, so the API server's --service-node-port-range would need to be extended accordingly). A minimal manifest is sketched after this list.

  • AWS Console: Create a Classic or Layer 4 (Network) Load Balancer inside the same VPC as your nodes. In the Listeners tab, open port 35432 (and any other ports you need), pointing to one or all of your nodes via a target group. You are not charged per listener port.
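A minimal sketch of the Service from the second step (the names are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: postgres              # hypothetical
    spec:
      type: NodePort
      selector:
        app: postgres             # hypothetical pod label
      ports:
      - port: 5432                # ClusterIP port
        targetPort: 5432          # container port
        nodePort: 35432           # must lie within the apiserver's --service-node-port-range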

At this point, I don't know how to automate updating the set of currently live nodes in the load balancer's target group; this could be an issue if you use autoscaling. Maybe a cron job with a script that pulls instance info from the AWS API and updates the target group?
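One way that cron idea could look, assuming the nodes belong to a single Auto Scaling group and the pod has AWS credentials (for example via the node's instance role); the ASG name and target group ARN below are placeholders. Note that AWS can also keep a target group in sync natively by attaching it to the Auto Scaling group (aws autoscaling attach-load-balancer-target-groups), which would avoid the cron job entirely.

    apiVersion: batch/v1beta1     # batch/v1 on Kubernetes 1.21+
    kind: CronJob
    metadata:
      name: sync-nlb-targets      # hypothetical
    spec:
      schedule: "*/5 * * * *"
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: sync
                image: amazon/aws-cli
                command: ["/bin/sh", "-c"]
                args:
                - |
                  # List instance IDs currently in the (hypothetical) Auto Scaling group,
                  # then register each one with the (hypothetical) target group.
                  ids=$(aws autoscaling describe-auto-scaling-groups \
                    --auto-scaling-group-names my-k8s-nodes \
                    --query 'AutoScalingGroups[0].Instances[].InstanceId' \
                    --output text)
                  for id in $ids; do
                    aws elbv2 register-targets \
                      --target-group-arn "$TG_ARN" \
                      --targets "Id=$id,Port=35432"
                  done
                env:
                - name: TG_ARN    # placeholder ARN
                  value: arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/pg/0123456789abcdef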

-- Hugo V
Source: StackOverflow