Access private GKE clusters from a different region?

1/3/2020

I have created a GKE cluster using the below command:

gcloud beta container clusters create "cluster-asia-south1" \
    --region "asia-south1" \
    --project "project123" \
    --cluster-version "1.14.8-gke.12" \
    --machine-type "n1-standard-1" \
    --image-type "COS" --disk-type "pd-standard" --disk-size "100" \
    --scopes "https://www.googleapis.com/auth/cloud-platform" \
     --num-nodes "1" \
    --no-enable-basic-auth \
    --metadata disable-legacy-endpoints=true \
    --max-pods-per-node "110" --enable-stackdriver-kubernetes \
    --enable-ip-alias \
    --network "projects/project123/global/networks/default" \
    --subnetwork "projects/project123/regions/asia-south1/subnetworks/default" \
    --default-max-pods-per-node "110" \
    --addons HorizontalPodAutoscaling,HttpLoadBalancing \
    --no-enable-autoupgrade \
    --no-enable-autorepair \
    --node-locations asia-south1-a,asia-south1-b

I understand this cluster can be accessed from VMs inside the asia-south1 region (e.g. gcp-vm-asia-south1-a).

Hence I installed an OpenVPN server on this VM (gcp-vm-asia-south1-a). Now, when I connect to this VM from my local system, I am able to access the cluster's master endpoint, and the command below works fine:

gcloud container clusters get-credentials "cluster-asia-south1" --region "asia-south1"

And then kubectl get pods works fine, and I am able to connect via Helm as well.

Suppose I have two more clusters in the same VPC but different regions (say cluster-us-central1 and cluster-us-west1). How do I use the same OpenVPN server to access these clusters as well?

I understand that if I set up one OpenVPN server per region, I will be able to connect to the respective VPN server, and the GKE cluster in that region will be accessible without a problem.

But I do not want to manage three OpenVPN servers, one per region. Managing a bastion host with a few iptables or forwarding rules, or something similar, would be fine.

The idea is to keep one OpenVPN server per VPC, no matter how many regions there are. Is this feasible? Is there any way to do it?
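A minimal sketch of the kind of forwarding I had in mind on the single OpenVPN VM (the NIC name and VPN client range are assumptions, not taken from my actual config):

```shell
# Enable routing on the OpenVPN VM so packets from VPN clients
# can be forwarded on toward the rest of the VPC
sudo sysctl -w net.ipv4.ip_forward=1

# NAT traffic from the assumed VPN client range (10.8.0.0/24) out of the
# VM's primary NIC (ens4 on newer GCE images, eth0 on older ones),
# so replies from other subnets come back through this VM
sudo iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o ens4 -j MASQUERADE
```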

I tried adding VMs, subnets, and the clients' private IP ranges to --master-authorized-networks, but nothing works (I think because they are all in different regions).
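For reference, this is roughly what I was trying (the CIDR values here are placeholders, not my real ranges):

```shell
# Hypothetical example: whitelisting a VPN client range and a subnet range
# on one of the remote-region clusters
gcloud container clusters update cluster-us-central1 \
    --region us-central1 \
    --enable-master-authorized-networks \
    --master-authorized-networks 10.8.0.0/24,10.160.0.0/20
```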

-- Amit Yadav
google-cloud-platform
google-kubernetes-engine
kubectl
kubernetes-helm
kubernetes-ingress

2 Answers

1/3/2020

Did you use the --enable-master-authorized-networks flag together with --master-authorized-networks, as mentioned in the documentation? Did you check masterAuthorizedNetworksConfig with the command gcloud container clusters describe [CLUSTER_NAME]?
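For example, assuming the cluster name and region from your question, you can inspect just the relevant block with:

```shell
# Show only the authorized-networks configuration of the cluster
gcloud container clusters describe cluster-asia-south1 \
    --region asia-south1 \
    --format "yaml(masterAuthorizedNetworksConfig)"
```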

Do you have any firewall rules that could restrict access to the other clusters from your OpenVPN server?
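You can check with something like the following (the network name "default" is taken from the create command in your question):

```shell
# List the firewall rules on the shared VPC network,
# with their direction and allowed source ranges
gcloud compute firewall-rules list \
    --filter "network=default" \
    --format "table(name,direction,sourceRanges.list())"
```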

EDIT: The cause of your connectivity problem could be the use of subnets from different regions: "A VPC network is a global resource, but individual subnets are regional resources" and "Regional resources are accessible by any resources within the same region".

-- Serhii Rohoza
Source: StackOverflow

2/3/2020

I followed this blog from GCP to deploy the proxy; there is another VM in the same region with an OpenVPN server deployed on it.

I connect my local machine to the OpenVPN server and set my proxy using the https_proxy=LOAD_BALANCER_IP:PORT variable shown in the blog.

Now my local machine is able to interact with the master IP of the GKE cluster: the master thinks the request is coming from the proxy service deployed inside the cluster, and the proxy service thinks the request is coming not from outside the region but from the OpenVPN server (VM) in the same region and VPC.
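Roughly, the resulting workflow looks like this (LOAD_BALANCER_IP:PORT stands in for the internal load balancer address created by the blog's setup; the cluster name is one of the examples from my question):

```shell
# Fetch credentials for the remote-region cluster as usual
gcloud container clusters get-credentials cluster-us-central1 --region us-central1

# Route only kubectl's traffic through the in-cluster proxy,
# which is reachable over the VPN via the internal load balancer
https_proxy=LOAD_BALANCER_IP:PORT kubectl get pods
```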

-- Amit Yadav
Source: StackOverflow