Kubernetes traffic with IP masquerading within private network

2/24/2020

I would like my Pods in Kubernetes to connect to other processes outside the cluster but within the same VPC (running on VMs or on a BGP-propagated network). As I'm running the cluster on GCP, outgoing traffic from the Kubernetes cluster can be NAT'ed with Cloud NAT for external destinations, but traffic inside the same VPC does not get NAT'ed.

I can simply connect with the private IP, but there is source IP filtering in place for some of the target processes. Since they are not maintained by me and need to run on VMs or on another network, I'm trying to see if there is any way to IP-masquerade traffic that leaves the Kubernetes cluster even within the same VPC. I thought of somehow getting a static IP assigned to a Pod / StatefulSet, but that seems difficult (and it does not feel right to bend Kubernetes networking that way even if it were somehow possible).
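
For reference, the closest thing I've found so far is GKE's ip-masq-agent, which decides per destination CIDR whether traffic is SNAT'ed to the node IP. Below is a rough sketch of the kind of ConfigMap I'd imagine, where the VPC range is deliberately left out of nonMasqueradeCIDRs so in-VPC destinations get masqueraded too. The CIDRs are placeholders for my setup, and I'm not sure this is the right approach, which is why I'm asking:

```yaml
# Sketch of a kube-system/ip-masq-agent ConfigMap (CIDRs are placeholders).
# Leaving the VPC subnet range out of nonMasqueradeCIDRs should cause traffic
# to those destinations to leave the node SNAT'ed to the node's IP.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.4.0.0/14    # Pod CIDR (placeholder)
      - 10.8.0.0/20    # Service CIDR (placeholder)
    masqLinkLocal: false
    resyncInterval: 60s
```

Even with that, the source would be the node IP rather than a single stable address, which is why I'm also wondering about a static IP or a separate NAT.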

Is there anything I can do to handle these traffic requirements from within Kubernetes? Or should I be looking at setting up a separate NAT outside the Kubernetes cluster and routing traffic through it?

-- Ryota
google-kubernetes-engine
kubernetes
nat
networking

1 Answer

2/25/2020

I think that a better option is to configure Internal TCP/UDP Load Balancing.

Internal TCP/UDP Load Balancing makes your cluster's services accessible to applications outside of your cluster that use the same VPC network and are located in the same Google Cloud region. For example, suppose you have a cluster in the us-west1 region and you need to make one of its services accessible to Compute Engine VM instances running in that region on the same VPC network.
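
As a minimal sketch of what that looks like on GKE, you can expose your workload with a Service of type LoadBalancer carrying the internal load balancer annotation. The names, labels, and ports below are placeholders; adjust them to your Deployment:

```yaml
# Sketch: exposes matching Pods on an internal VPC IP in the cluster's region,
# so VMs on the same VPC network can reach the service without external exposure.
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service            # placeholder name
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app                        # placeholder label, must match your Pods
  ports:
    - protocol: TCP
      port: 8080                       # port exposed on the internal LB IP
      targetPort: 8080                 # container port
```

Once the Service is up, `kubectl get service my-internal-service` shows the internal IP under EXTERNAL-IP, and that single address is what the VMs (and any source IP filters) see for inbound connections to the cluster.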

-- ginerama
Source: StackOverflow