How to reach hosted Postgres in GCP from a Kubernetes cluster, directly via private IP

5/14/2019

So, I created a PostgreSQL instance in Google Cloud, and I have a Kubernetes cluster with containers that I would like to connect to it. I know that the Cloud SQL proxy sidecar is one method, but the documentation says that I should be able to connect to the private IP as well.

I notice that a VPC peering connection was automatically created for me. It's set for a destination network of 10.108.224.0/24, which is where the instance is, with a "Next hop region" of us-central1, where my K8s cluster is.

And yet when I try the private IP via TCP on port 5432, I time out. I see nothing in the documentation about having to modify firewall rules to make this work, but I tried that anyway. I find the firewall interface in GCP rather clumsy and confusing compared with writing my own rules using iptables, and my attempts failed.
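
For reference, this is roughly how the timeout shows up from inside the cluster (the address below is just an illustrative IP in that range, not the real instance IP):

# Run a throwaway pod with psql and attempt a connection with a short timeout.
kubectl run -it --rm pg-test --image=postgres:11 --restart=Never -- \
    psql "host=10.108.224.3 port=5432 user=postgres connect_timeout=5" -c 'select 1'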

Beyond falling back to the Cloud SQL proxy sidecar, does anyone have an idea why this would not work?

Thanks.

-- Michael Soulier
google-cloud-platform
google-cloud-sql
kubernetes
postgresql

3 Answers

5/22/2019

In the end, the simplest thing to do was to just use the Google Cloud SQL proxy. As opposed to running it as a sidecar, I have multiple containers needing DB access, so I put the proxy into my cluster as its own deployment with a service, and it seems to just work.
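
A minimal sketch of that setup, assuming an illustrative project my-project, instance my-instance in us-central1, and a pre-created secret cloudsql-credentials holding a service-account key with the Cloud SQL Client role:

# Run the Cloud SQL proxy as its own Deployment, fronted by a Service.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsql-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsql-proxy
  template:
    metadata:
      labels:
        app: cloudsql-proxy
    spec:
      containers:
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.16
        command:
        - /cloud_sql_proxy
        # Listen on all interfaces so the in-cluster Service can reach the proxy.
        - -instances=my-project:us-central1:my-instance=tcp:0.0.0.0:5432
        - -credential_file=/secrets/cloudsql/credentials.json
        volumeMounts:
        - name: cloudsql-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-credentials
        secret:
          secretName: cloudsql-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql-proxy
spec:
  selector:
    app: cloudsql-proxy
  ports:
  - port: 5432
    targetPort: 5432
EOF

Application pods can then reach Postgres at cloudsql-proxy:5432 inside the cluster.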

-- Michael Soulier
Source: StackOverflow

5/15/2019

Connecting over private IP via the automatically created VPC peering works only if your Cloud SQL instance and your compute resources are in the same VPC.

When you create the Cloud SQL instance you can choose the VPC, and for your compute VMs the VPC and subnet; set up GKE on that same VPC and you can then make the connection from a pod to Cloud SQL.
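
A rough sketch of that same-VPC setup with gcloud, assuming the default VPC in us-central1 and illustrative names (the private-IP flags were still beta at the time):

# Allocate an IP range for private services access and peer it with the VPC.
gcloud compute addresses create google-managed-services-default \
    --global --purpose=VPC_PEERING --prefix-length=16 --network=default
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-default --network=default

# Create the Cloud SQL instance with a private IP on that VPC.
gcloud beta sql instances create my-instance --region=us-central1 \
    --database-version=POSTGRES_9_6 --tier=db-g1-small \
    --network=default --no-assign-ip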

-- Harsh Manvar
Source: StackOverflow

5/14/2019

Does your GKE cluster meet the environment requirements for private IP? It needs to be a VPC-native cluster on the same VPC and in the same region as your Cloud SQL instance.
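
If you are unsure, something along these lines checks an existing cluster and shows how a VPC-native cluster gets created (cluster name and zone are illustrative):

# Is the existing cluster VPC-native (alias IPs enabled)?
gcloud container clusters describe my-cluster --zone=us-central1-a \
    --format="value(ipAllocationPolicy.useIpAliases)"

# Creating a VPC-native cluster on the same VPC, in the same region as the instance.
gcloud container clusters create my-cluster --zone=us-central1-a \
    --enable-ip-alias --network=default --subnetwork=default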

-- kurtisvg
Source: StackOverflow