I am currently setting up my first Kubernetes cluster on Google Cloud (GKE), having previously used mainly AWS for this.
The cluster is up and running and can access a local NFS server on the same Compute Engine VPC via private IP, so that stage of private networking is fine.
The Cloud SQL instance is running, and I can access it fine from the cluster if I open up its public IP to the world.
I have enabled a private IP address on the Cloud SQL instance, which looks good, but I cannot ping or connect to it from the same container that can reach the public IP.
The Cloud SQL private IP is in a different subnet, which I believe is to be expected. I checked VPC Network Peering and found a relevant-looking peering, and checked VPC routes and found the matching peering route with the expected next hop.
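For reference, this is roughly how I have been checking things (the network, pod, IP and port below are placeholders for my actual values):

    # List VPC peerings on the cluster's network (should show the peering to Google's service networking range)
    gcloud compute networks peerings list --network=my-vpc

    # Test TCP connectivity to the Cloud SQL private IP from a pod
    # (requires nc in the container image; 10.20.0.3 / 3306 stand in for the real private IP and port)
    kubectl exec -it my-app-pod -- nc -vz -w 3 10.20.0.3 3306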
I have seen in the docs that private IP is still in beta, so I guess there is potential for a glitch beyond my control. I have also read up on running the Cloud SQL Proxy as a sidecar container inside each pod, but I am hesitant to do that unless it is the only option; the app may end up running across platforms, so I would prefer a more standard configuration.
There is currently a requirement that the GKE cluster be created with "VPC-Native" networking (alias IPs) in order to reach Cloud SQL over its private IP. Unfortunately, you need to re-create the cluster in order to make it VPC-Native.
https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips
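For example, something along these lines when creating the replacement cluster (the cluster name and zone are placeholders; see the linked docs for the full set of flags relevant to your setup):

    # Create a new VPC-native (alias IP) cluster; this cannot be switched on for an existing cluster
    gcloud container clusters create my-cluster \
        --zone=europe-west1-b \
        --enable-ip-alias

    # Confirm alias IPs / VPC-native networking are enabled on the new cluster
    gcloud container clusters describe my-cluster \
        --zone=europe-west1-b \
        --format="value(ipAllocationPolicy.useIpAliases)"

Once the cluster is VPC-native, pods should be able to reach the Cloud SQL private IP directly, without needing the proxy sidecar.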