Connect to GCP Cloud SQL instance via private IP

11/3/2018

I am currently setting up my first GKE Kubernetes cluster, having previously used mainly AWS for this.

The cluster is up and running and can reach a local NFS server on the same Compute Engine VPC via private IP, so at least one form of private network connectivity is working.

The Cloud SQL server is running, and I can access it fine from the cluster if I open up the public IP to the world.

I have enabled a private IP address on the Cloud SQL instance, which looks good, but I cannot ping or connect to it from the same container that can reach the public IP.
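Ping may be misleading here anyway: as far as I know, Cloud SQL instances don't answer ICMP, so a TCP check is more telling. Here is roughly how I'm testing from inside a pod (a minimal sketch; the IPs are placeholders and I'm assuming the default PostgreSQL port):

```python
# Minimal TCP reachability check run from inside the pod. The IPs below
# are placeholders for my instance's public and private addresses, and I'm
# assuming the default PostgreSQL port 5432 (use 3306 for MySQL).
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("public: ", can_connect("35.200.0.10", 5432))  # succeeds
print("private:", can_connect("10.59.0.3", 5432))    # times out
```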

The Cloud SQL private IP is on a different subnet, which I believe is to be expected. I checked VPC network peering and found a relevant-looking entry, and the VPC routes show the matching peering route with its next hop.
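For what it's worth, the same checks can be scripted against the Compute API rather than done in the console. A sketch using the google-api-python-client library, with placeholder project and network names:

```python
# Inspect VPC peerings and peering routes via the Compute API.
# "my-project" and "default" are placeholders for the real project/network.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")  # uses application default credentials
project, network = "my-project", "default"

# Cloud SQL private IP relies on a service networking peering on the VPC;
# its state should be ACTIVE.
net = compute.networks().get(project=project, network=network).execute()
for peering in net.get("peerings", []):
    print(peering["name"], peering.get("state"))

# Peering routes should show up with a nextHopPeering field.
routes = compute.routes().list(project=project).execute()
for route in routes.get("items", []):
    if "nextHopPeering" in route:
        print(route["destRange"], "->", route["nextHopPeering"])
```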

I have seen in the docs that private IP is still in beta, so I guess this could be a glitch beyond my control. I have also read about running the Cloud SQL Proxy as a sidecar container inside each pod, but I'm hesitant to do that unless it's the only option; the app may end up running across platforms, so I'd prefer a more standard configuration.

-- Mark Walker
google-cloud-sql
google-compute-engine
google-kubernetes-engine

1 Answer

11/4/2018

There's currently a requirement that the GKE cluster be created with "VPC-Native" networking in order to access Cloud SQL via private IP. Unfortunately, you need to re-create the cluster to make it VPC-Native.

https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips
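If you want to confirm whether an existing cluster is VPC-Native before re-creating it, a sketch along these lines should work (the cluster path is a placeholder; this uses the google-cloud-container client). When you do re-create, VPC-Native corresponds to the --enable-ip-alias flag on gcloud container clusters create, i.e. the ipAllocationPolicy.useIpAliases field in the API:

```python
# Check whether a cluster is VPC-Native (alias IP). The project, location
# and cluster name below are placeholders.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()
name = "projects/my-project/locations/us-central1-a/clusters/my-cluster"

cluster = client.get_cluster(name=name)
# VPC-Native clusters have use_ip_aliases set on the IP allocation policy;
# this cannot be toggled after creation, hence the need to re-create.
print("VPC-Native:", cluster.ip_allocation_policy.use_ip_aliases)
```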

-- Vadim
Source: StackOverflow