Access external database resource wrapped in service without selector

7/3/2019

I created a managed Postgres database in Google Cloud. This database got an external IP address. In a second step I created a Kubernetes cluster. From within the cluster I want to access this external database, so I created a service without a label selector but with an external endpoint pointing to my Postgres database.
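For reference, a service without a selector is typically paired with a manually created Endpoints object of the same name. A minimal sketch of that pattern, assuming the service is called postgres-bla-bla and 203.0.113.10 stands in for the database's external IP:

```yaml
# Service with no selector: Kubernetes will not create Endpoints for it
# automatically, so we define them ourselves below.
apiVersion: v1
kind: Service
metadata:
  name: postgres-bla-bla
spec:
  ports:
    - port: 5432
      targetPort: 5432
---
# Endpoints object with the SAME name as the Service, pointing at the
# external database. 203.0.113.10 is a placeholder for the real IP.
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres-bla-bla
subsets:
  - addresses:
      - ip: 203.0.113.10
    ports:
      - port: 5432
```

Pods in the cluster can then reach the database at postgres-bla-bla:5432.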

I had to allow access to the Postgres database from the (three) cluster nodes. I configured that in the Google Cloud Console (SQL).

My first question: Is this the right way to integrate an external database? Especially this IP access configuration?

To test my connection to the database, my first attempt was to establish a port forwarding from my local host. My idea was to access the database via my database IDE (DataGrip). However, when trying to establish the port forwarding I get the following error:

error: cannot attach to *v1.Service: invalid service 'postgres-bla-bla': Service is defined without a selector

Second question: How to access this service locally?

In a third step I created a pod from the 'partlab/ubuntu-postgresql' Docker image. I did a 'kubectl exec -it ...' and could access my Postgres database with

psql -h postgres-bla-bla ...

So basically it works. But I'm sure my solution has some flaws. What can I do better? How to fix the problem from question 2?

-- Thomas Seehofchen
google-cloud-platform
kubernetes
postgresql

2 Answers

7/4/2019

While it is fine to reach external services from within the cluster using a service without selectors, an alternative approach that might fit your particular scenario would be an ExternalName Service:

Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra
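A minimal sketch of such a service (note that ExternalName maps to a DNS name, not a raw IP, so it only applies if your database is reachable via a hostname; db.example.com is a placeholder):

```yaml
# ExternalName Service: resolving postgres-bla-bla inside the cluster
# returns a CNAME record for the external hostname.
apiVersion: v1
kind: Service
metadata:
  name: postgres-bla-bla
spec:
  type: ExternalName
  externalName: db.example.com  # placeholder for the database's hostname
```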

Additionally, considering that you're using Cloud SQL on GCP, a more reliable way to connect to the database without having to whitelist the node IP addresses would be the Cloud SQL Proxy:

The Cloud SQL Proxy provides secure access to your Cloud SQL Second Generation instances without having to whitelist IP addresses or configure SSL.
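A common way to run the proxy is as a sidecar container next to your application, which then connects to the database at 127.0.0.1:5432. A rough sketch, assuming a hypothetical app image, an instance connection name of my-project:us-central1:my-instance, and a service-account key stored in a Secret named cloudsql-sa-key:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-cloudsql
spec:
  containers:
    - name: app
      image: my-app:latest  # placeholder application image
    - name: cloudsql-proxy
      # Image and flags follow the documented Cloud SQL Proxy pattern;
      # the instance connection name below is a placeholder.
      image: gcr.io/cloudsql-docker/gce-proxy:1.16
      command: ["/cloud_sql_proxy",
                "-instances=my-project:us-central1:my-instance=tcp:5432",
                "-credential_file=/secrets/credentials.json"]
      volumeMounts:
        - name: cloudsql-creds
          mountPath: /secrets
          readOnly: true
  volumes:
    - name: cloudsql-creds
      secret:
        secretName: cloudsql-sa-key  # placeholder Secret with the SA key
```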

Regarding your second question, since the service that is currently connected to the database exists only within the cluster, you'll need to get access to the cluster network in order to be able to reach whatever external endpoint is mapped into it.

As you mentioned, opening a shell session in a pod lets you reach your PostgreSQL service, since the pod is inside the cluster network and can communicate with the service. To reach it from your machine instead, you can expose the database service using any of the usual Kubernetes service-exposing methods (NodePort, LoadBalancer, etc.) and then point your local client at the exposed service, which relays the communication to your database.

-- yyyyahir
Source: StackOverflow

9/20/2019

The problem was discussed here and there is a solution to set up port forwarding to a service without selector/pod (e.g. ExternalName service) by deploying a proxy pod inside K8s:

kubectl -n production run mysql-tunnel-$USER -it --image=alpine/socat --tty --rm --expose=true --port=3306 tcp-listen:3306,fork,reuseaddr tcp-connect:your-internal-mysql-server:3306
kubectl -n production port-forward svc/mysql-tunnel-$USER 3310:3306

In the example above the MySQL server at your-internal-mysql-server:3306 will be available on localhost:3310 on your machine.

-- Aldekein
Source: StackOverflow