How to make GKE clusters communicate using private IPs or networks?

11/12/2019

Is there any way to make GKE clusters (in the same network) talk to each other using internal IP addresses?

I know GKE internal load balancers can only handle traffic from the same network and the same region, which I find a strange limitation.

I understand pod IPs are routable, but they are not static and can change at any time. I also know there is a loadBalancerSourceRanges configuration option on external load balancers with which I can allow only the subnets I want, but what if I want to keep all communication internal and avoid using a public IP entirely?

Is there any way to achieve this, for example by configuring firewall rules, or by enabling "Global routing mode" when creating the VPC network, or anything else?

-- Amit Yadav
google-cloud-platform
google-kubernetes-engine
internal-load-balancer
load-balancing

1 Answer

11/13/2019

If you have 2 clusters in 2 different regions and you want them to communicate using internal IPs, your best option is to expose your pods with a NodePort Service in each cluster and then configure a VM instance to act as a proxy for each cluster.
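A minimal sketch of the NodePort Service for one cluster (the app name, ports, and label are illustrative, not from the original answer):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app           # assumes pods labeled app=my-app
  ports:
    - port: 80            # Service port inside the cluster
      targetPort: 8080    # container port the pods listen on
      nodePort: 30080     # port opened on every node's internal IP
```

With this in place, any host that can reach the nodes' internal IPs (such as the proxy VM) can hit the pods on port 30080 of any node.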

This has the same effect as using a LoadBalancer Service as an internal load balancer, but with the benefit that it works across multiple regions. It also allows the same proxy to handle requests for all your services.

The one thing you need to be careful of is overloading the proxy instance. Depending on the volume of requests, you may need to configure multiple proxy instances per cluster, each one handling only a handful of services.
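One way to implement such a proxy VM is with nginx's stream module; a sketch is below, where the node IPs, NodePort, and upstream name are placeholders for your own cluster's values:

```nginx
# /etc/nginx/nginx.conf on the proxy VM (illustrative values)
stream {
    upstream cluster_a_service {
        # internal IPs of the cluster's nodes, on the Service's NodePort
        server 10.128.0.10:30080;
        server 10.128.0.11:30080;
    }

    server {
        # clients in other regions connect to the proxy VM's internal IP
        listen 80;
        proxy_pass cluster_a_service;
    }
}
```

Clients in the other cluster then target the proxy VM's internal IP instead of a public address, keeping all traffic on the VPC.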

-- Patrick W
Source: StackOverflow