Creating a highly available and heavily used database

9/14/2018

Currently, I have an application consisting of a backend, frontend, and database. The Postgres database has a table with around 60 million rows. This table has a foreign key to another table: categories. So, if I want to count every row from a specific category (I know counting is one of the slowest operations in a DB), on my current setup this results in a 5-minute query. Right now, the DB, backend, and frontend are all just running on a single VM.
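For a per-category count like this, the usual first step in Postgres is an index on the foreign-key column (Postgres does not create one automatically for the referencing side). A minimal sketch, assuming an illustrative table `items` with a `category_id` column; names are assumptions, not from the question:

```sql
-- Without an index on the FK column, a per-category count on a 60M-row
-- table falls back to a full sequential scan.
-- CONCURRENTLY avoids locking writes while the index builds.
CREATE INDEX CONCURRENTLY idx_items_category_id ON items (category_id);

-- With the index (and a recently vacuumed table), Postgres can answer
-- this with an index-only scan instead of reading all 60M rows:
SELECT count(*) FROM items WHERE category_id = 42;
```

Whether this helps more than new hardware depends on your workload, but it is worth ruling out before re-architecting the deployment.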

I've now containerized the backend and the frontend and I want to spin them up in Google Kubernetes Engine.

So my question: will the performance of my queries go up if I also containerize the DB and let Kubernetes do some load-balancing work, or should I use Google's Cloud SQL? Does anyone have experience with this?

-- DonKanallie
docker
google-cloud-platform
google-cloud-sql
kubernetes
postgresql

1 Answer

9/14/2018

will the performance of my queries go up if I also containerize the DB

Raw performance will only go up if the new nodes have more capacity than your current VM; running the DB on a Kubernetes node of the same size will not make it faster. You won't get performance benefits from containers in this case, other than that upgrading your DB software might be a bit easier if you run it in Kubernetes. Many other factors are in play here, including what kind of disk backs your storage (SSD, magnetic, a clustered filesystem?).
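If you do decide to run Postgres in the cluster, the usual pattern is a StatefulSet with a persistent volume rather than a plain Deployment. A minimal sketch; the image tag, storage size, and `ssd` StorageClass (assumed to be backed by `pd-ssd` on GKE) are all assumptions you would adjust:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1                  # single primary; HA needs a replication operator on top
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ssd  # assumes a StorageClass backed by pd-ssd
        resources:
          requests:
            storage: 200Gi
```

Note that Kubernetes itself won't load-balance SQL queries; read replicas and failover still have to come from Postgres-level replication tooling.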

If your goal is instead to maximize resource utilization in your cluster, by making use of spare capacity when, say, not many queries are being sent to your database, then Kubernetes/containers might be a good choice. (But that's not what the original question is about.)

should I use Google's Cloud SQL

The only reason I would use Cloud SQL is if you want to offload managing your SQL db. Other than that, you'll get similar performance numbers to running on a GCE instance of the same size.

-- Rico
Source: StackOverflow