How to keep a long-lived connection alive even when a Kubernetes pod gets killed?

11/13/2021

I have the following architecture for the PostgreSQL cluster:

[Architecture diagram: clients connecting to PostgreSQL pods through pgpool]

Here, multiple clients interact with the PostgreSQL pods via pgpool. The issue is that when a pod (either a pgpool or a PostgreSQL pod) terminates (for any of several reasons), the client is impacted and has to recreate its connection. For example, in this diagram, if the postgresql-1 pod terminates, then client-0 will have to recreate its connection to the cluster.

Is there a way in Kubernetes to handle this so that connections to the pgpool k8s Service are load balanced / recreated on other pods, so that the clients do not see the switchover and are not impacted?

Please note these are TCP connections, not HTTP connections (which are stateless). Also, all the PostgreSQL pods are kept in sync with remote_apply.

-- Vishrant
high-availability
kubernetes
kubernetes-ingress
pgpool
postgresql

2 Answers

11/13/2021

Without substantial custom code to support TCP connection transfers between hosts, you essentially can't. When a process shuts down, all TCP streams it has open are closed; that's how normal Linux networking works. If you poke around on your search engine of choice for terms like "TCP connection migration" you'll find a lot of research efforts but little actual code. More often you just terminate the TCP connection at some long-lived edge proxy, and if that proxy has to restart you eat the reconnects.
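The point above can be demonstrated with plain sockets: when the peer process (here simulated by closing the accepted connection, standing in for a terminating pod) goes away, the client's stream simply ends. This is a minimal illustrative sketch, not tied to pgpool or PostgreSQL.

```python
import socket
import threading

# Demonstrate that when the peer closes (e.g. the process/pod terminates),
# the TCP stream ends: the client's recv() returns b'' (EOF).

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # pick any free port
srv.listen(1)
port = srv.getsockname()[1]

def accept_and_close():
    conn, _ = srv.accept()
    conn.close()             # simulates the server-side pod going away

t = threading.Thread(target=accept_and_close)
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
t.join()

data = client.recv(1024)     # EOF: the peer has closed the stream
print(data == b"")           # → True

client.close()
srv.close()
```

There is no kernel-level way for an ordinary application to hand this stream to another host; the client has to reconnect.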

-- coderanger
Source: StackOverflow

11/14/2021

Is there a way in kubernetes to handle it so that connections to pgpool k8s service are load balanced/ recreated to other pods...

Connections to the pgpool k8s Service are load balanced by kube-proxy. The Endpoints (pgpool pods) backing the Service are automatically updated whenever there is a change (e.g. scaling) in the pod population.
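A minimal Service manifest for this setup might look like the following sketch (names and labels are illustrative, not taken from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pgpool              # illustrative Service name
spec:
  selector:
    app: pgpool             # assumed label on the pgpool pods
  ports:
    - port: 5432
      targetPort: 5432
```

kube-proxy keeps the Endpoints for this Service in sync with the matching pods, so new connections are spread across whichever pgpool pods are currently healthy.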

...so that the clients do not see the switch over and are not impacted?

Should the pgpool pod that the client is connected to get terminated, the client's TCP state becomes invalid (e.g. the remote IP is gone). There is no way to keep such a connection alive; instead, reconnect to the pgpool Service, where kube-proxy will route you to the next available pgpool pod. The actual connections to the backend databases are managed by pgpool, including database failover, so with pgpool as the proxy you do not need to worry about database switching.
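The reconnect step can live in a small client-side helper. This is a hedged sketch: `connect` stands in for whatever opens the real connection (e.g. a driver call pointed at the pgpool Service DNS name); the stub below only exercises the retry path.

```python
import time

def connect_with_retry(connect, attempts=5, delay=0.1):
    """Re-establish a connection after the previous one is lost.

    `connect` is any callable that opens a connection; in this setup it
    would target the pgpool Service name, letting kube-proxy pick a
    healthy pod. Retries on failure with a fixed delay between tries.
    """
    last_err = None
    for _ in range(attempts):
        try:
            return connect()
        except OSError as err:
            last_err = err
            time.sleep(delay)
    raise last_err

# Stubbed connect that fails twice, then succeeds, to show the retry path.
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("connection refused")
    return "connected"

result = connect_with_retry(flaky_connect, delay=0)
print(result)  # → connected
```

In practice you would add backoff and a cap on total retry time, but the shape is the same: treat a dropped connection to the Service as routine and reconnect.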

-- gohm'c
Source: StackOverflow