Cannot connect to Private, Regional GKE endpoint from OpenVPN client

8/15/2018

I created the GKE Private Cluster via Terraform (google_container_cluster with private = true and region set) and installed the stable/openvpn Helm chart. My setup is basically the same as the one described in this article: https://itnext.io/use-helm-to-deploy-openvpn-in-kubernetes-to-access-pods-and-services-217dec344f13, and I can reach a ClusterIP-only exposed service as the article describes. However, while I am connected to the VPN, kubectl fails because it cannot reach the master.

I left the OVPN_NETWORK setting at its default (10.240.0.0) and changed the OVPN_K8S_POD_NETWORK and subnet mask settings to the secondary range I chose when creating the private subnet that the Private Cluster lives in.
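
For reference, roughly how I installed the chart (a minimal sketch; the OVPN_K8S_POD_SUBNET key is my recollection of the chart's name for the subnet-mask value, and 10.36.0.0 / 255.252.0.0 is just a placeholder for my secondary range, so check both against the chart's values.yaml):

    # OVPN_NETWORK stays at the chart default, 10.240.0.0
    helm install stable/openvpn --name openvpn \
      --set openvpn.OVPN_K8S_POD_NETWORK=10.36.0.0 \
      --set openvpn.OVPN_K8S_POD_SUBNET=255.252.0.0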

I even tried adding 10.240.0.0/16 to my master_authorized_networks_config, but I'm pretty sure that setting only works for external networks (adding the external IP of a completely different OVPN server allows me to run kubectl while connected to it).
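
(For completeness, that change is the equivalent of this gcloud form — cluster name and region here are placeholders:

    gcloud container clusters update my-private-cluster \
      --region us-central1 \
      --enable-master-authorized-networks \
      --master-authorized-networks 10.240.0.0/16

— though, as noted, it didn't let my VPN clients reach the master.)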

Any ideas what I'm doing wrong here?

Edit: I just remembered I had to set a value for master_ipv4_cidr_block in order to create a Private Cluster. So I added 10.0.0.0/28 to the ovpn.conf file as push "route 10.0.0.0 255.255.255.240", but that didn't help. The docs on this setting state:

Specifies a private RFC1918 block for the master's VPC. The master range must not overlap with any subnet in your cluster's VPC. The master and your cluster use VPC peering. Must be specified in CIDR notation and must be /28 subnet.

but what's the implication for an OpenVPN client on a subnet outside of the cluster? How do I leverage the aforementioned VPC peering?

-- smoll
google-kubernetes-engine
openvpn
private-subnet
terraform
terraform-provider-gcp

2 Answers

4/9/2019

You can add --internal-ip to your gcloud command so that it automatically writes the cluster's internal IP address to your ~/.kube/config file.
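
For example (cluster name and region are placeholders):

    gcloud container clusters get-credentials my-private-cluster \
      --region us-central1 \
      --internal-ip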

-- aslim
Source: StackOverflow

8/15/2018

Figured out what the problem is: gcloud container clusters get-credentials always writes the master's external IP address to ~/.kube/config, so kubectl always talks to that external address instead of the internal one.

To fix it: I ran kubectl get endpoints, noted the 10.0.0.x IP, and replaced the external IP in ~/.kube/config with it. Now kubectl works fine while I'm connected to the OVPN server inside the Kube cluster.
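
Roughly what I did, sketched with placeholder names (your kubeconfig cluster entry will look something like gke_<project>_<region>_<cluster>; kubectl config get-clusters lists it, and kubectl config set-cluster is just another way of editing the server field by hand):

    kubectl get endpoints                   # the kubernetes endpoint shows the 10.0.0.x master address
    kubectl config set-cluster gke_my-project_us-central1_my-cluster \
      --server=https://10.0.0.2             # swap in whichever 10.0.0.x IP you noted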

-- smoll
Source: StackOverflow