Container-orchestrated autoscaling deployments of Django open too many connections to the database

8/1/2019

I have deployed Django on Kubernetes. The node pool is configured to allow up to 1000 pods to serve traffic.

As the number of pods increases, my database starts to throw

"Sorry, too many clients".

The database is PostgreSQL, and it runs outside of Kubernetes, on a standalone Linux machine.

As a temporary measure, I have increased max_connections for my database. This is not the answer I am looking for.

I am trying to understand why this issue comes up in the first place - does Django open database connections and keep them open? Would 1000 pods result in 1000 live connections to my database?

I am also trying to understand the different ways to approach this problem.

Do I need to change something with my Django config to reduce the number of connections? Or do I need to change my database layer to support a higher number of connections?

How do web deployments on container orchestration systems usually deal with this problem?

-- Naved Khan
database
django
kubernetes

2 Answers

8/1/2019

Have you tried putting a PgBouncer in your cluster?

That way you can create a new deployment and limit the number of replicas; this lets you cap the number of database connections while still allowing the app to continue scaling.

Here is one I have used before: https://github.com/edoburu/docker-pgbouncer . There is an associated image on Docker Hub.
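Once PgBouncer runs in the cluster, Django just needs to connect to it instead of directly to PostgreSQL. A minimal sketch of the settings change, assuming a Kubernetes Service named "pgbouncer" listening on PgBouncer's default port 6432 (both names are assumptions, not values from this answer):

```python
# settings.py -- point Django at the in-cluster PgBouncer Service
# instead of directly at the PostgreSQL host.
# "pgbouncer" (Service name) and 6432 (PgBouncer's default listen
# port) are assumed values for this sketch; credentials are placeholders.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "secret",
        "HOST": "pgbouncer",  # Kubernetes Service fronting PgBouncer
        "PORT": "6432",
    }
}
```

PgBouncer then multiplexes the many pod connections onto a small, fixed pool of real connections to the standalone PostgreSQL machine.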

-- night-gold
Source: StackOverflow

8/1/2019

I did an Internet search for "django database connection" and the first result was this page which says:

Persistent connections avoid the overhead of re-establishing a connection to the database in each request. They’re controlled by the CONN_MAX_AGE parameter which defines the maximum lifetime of a connection. It can be set independently for each database.

The default value is 0, preserving the historical behavior of closing the database connection at the end of each request. To enable persistent connections, set CONN_MAX_AGE to a positive number of seconds. For unlimited persistent connections, set it to None.
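The behavior described above is controlled per-database in settings.py. A minimal sketch (database name is a placeholder):

```python
# settings.py -- CONN_MAX_AGE controls persistent connections.
# 0 (the default): close the connection at the end of each request.
# A positive number: keep the connection open for reuse for up to
#   that many seconds.
# None: keep connections open indefinitely.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "CONN_MAX_AGE": 60,  # reuse connections for up to 60 seconds
    }
}
```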

It also says:

each thread maintains its own connection

So if you have P pods each with T threads, then you could have up to P*T open connections from your Django app to your PG database, depending on how many requests each pod has handled within the last CONN_MAX_AGE seconds.
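A back-of-envelope calculation of that P*T bound makes the problem concrete (the thread count per pod is an assumption for illustration):

```python
# Worst-case connection count under the P * T bound described above.
pods = 1000          # P: the pool allows up to 1000 pods
threads_per_pod = 4  # T: assumed worker threads per pod

max_open_connections = pods * threads_per_pod
print(max_open_connections)  # 4000
```

4000 potential connections dwarfs PostgreSQL's default max_connections of 100, which is why the "Sorry, too many clients" errors appear as the deployment scales.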

One thing you can look into is using connection pooling, so that one connection can be shared between threads. This has its own tradeoffs; you can search the Internet to learn more.

Even if you leave CONN_MAX_AGE at 0 (the default) and use pooling, with P pods and enough inbound traffic that you must handle (at least) P concurrent requests, your database will still receive P concurrent connections.

One option in this case is simply to ensure your DB is configured to accept that many concurrent connections. You could get more sophisticated and consider whether the majority of your incoming requests only need read access to the DB. In that case you could set up multiple read replicas of your database and arrange for read requests from your Django app to be balanced amongst them; each replica then doesn't need to support the full P concurrent max connections.
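Django exposes a hook for exactly this kind of read/write split via DATABASE_ROUTERS. A hedged sketch of the read-replica idea, assuming database aliases "default", "replica1", and "replica2" have been defined in DATABASES (the alias names are illustrative):

```python
import random

# Sketch of a database router spreading reads across replicas.
# Alias names ("default", "replica1", "replica2") are assumptions;
# they must match entries configured in settings.DATABASES.
class ReadReplicaRouter:
    read_replicas = ["replica1", "replica2"]

    def db_for_read(self, model, **hints):
        # Balance read queries across the replicas.
        return random.choice(self.read_replicas)

    def db_for_write(self, model, **hints):
        # All writes still go to the primary.
        return "default"

# Activated in settings.py with something like:
# DATABASE_ROUTERS = ["myapp.routers.ReadReplicaRouter"]
```

With reads spread over N replicas, each database instance sees roughly 1/N of the read connections, so no single machine needs to accept all P concurrent connections.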

-- Amit Kumar Gupta
Source: StackOverflow