I've gone ahead and used the PostgreSQL Helm chart to install a small "cluster" on an on-premises Kubernetes cluster. The installation went smoothly: we have a master instance and two slaves to which the data is being replicated (the single restart seen below is fine; it was triggered manually for testing).
prod-postgres-postgresql-master-0 2/2 Running 0 15h
prod-postgres-postgresql-slave-0 1/1 Running 0 16h
prod-postgres-postgresql-slave-1 1/1 Running 1 9d
These pods came with their respective services (I am using a NodePort since there is no cloud provider to add an external IP to a LoadBalancer):
prod-postgres-postgresql NodePort 10.96.119.67 <none> 5432:31920/TCP 9d
prod-postgres-postgresql-headless ClusterIP None <none> 5432/TCP 9d
prod-postgres-postgresql-metrics ClusterIP 10.106.163.49 <none> 9187/TCP 9d
prod-postgres-postgresql-read ClusterIP 10.97.58.56 <none> 5432/TCP 9d
The values used for the installation are the same as the production values in the repo, with two small changes: the password and the storage class (for which I manually provided the needed PVs).
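For completeness, this is roughly how I connect from outside the cluster through the NodePort (a minimal sketch; only the port 31920 comes from the service listing above, while the node IP, credentials, and database name are placeholders):

```python
# Minimal connectivity check from outside the cluster via the NodePort.
# Only the port (31920) comes from the service listing above; the node IP,
# user, password, and database name are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="10.0.0.11",       # any worker node's IP (placeholder)
    port=31920,             # NodePort mapped to 5432 on the service
    user="postgres",
    password="<my-password>",
    dbname="postgres",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
conn.close()
```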
How do I now use this DB deployment to read from all Postgres nodes?
I understand that:
prod-postgres-postgresql points to the master and accepts reads and writes, while
prod-postgres-postgresql-read points to the slaves and is read-only.
Since the services are different, how can I tell my app it's allowed to read from more than just the master? For example, I'd expect something like the sketch below to work.
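This is the kind of behavior I'm hoping for (a sketch, assuming psycopg2, the default namespace, and placeholder credentials): new connections to the -read service should be spread across the replica pods by kube-proxy, so asking each backend for its address should show more than one IP.

```python
# Open several connections through the -read ClusterIP service and ask each
# backend for its address. kube-proxy balances new connections across the
# endpoint pods, so more than one address should show up over a few attempts.
import psycopg2

for _ in range(6):
    conn = psycopg2.connect(
        host="prod-postgres-postgresql-read.default.svc.cluster.local",
        port=5432,
        user="postgres",
        password="<my-password>",
        dbname="postgres",
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT inet_server_addr()")  # which pod answered?
        print(cur.fetchone()[0])
    conn.close()
```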
If this is not supported, then what is the "point" of this Helm chart? Combined with the lack of automatic failover, the slaves seem pointless.
To be honest, I have this question too. The documentation doesn't clearly state how to handle or enable this. The only solution I've come up with (I don't think it's optimal, nor do I know if it's the "right" approach) is to handle it on the app side rather than in k8s: connect to the master service (db-postgresql) for reads/writes and to the slave service (db-postgresql-read) for read-only queries, based on the type of DB operation intended. I'd certainly like more guidance here too. I've been looking into Stolon as an alternative.
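To make the split concrete, here's a rough sketch of what I mean (Python/psycopg2; the service names match my release name db, and the namespace, credentials, table, and database name are placeholders):

```python
import psycopg2

# Writes (and reads that must see the latest data) go to the master service;
# plain reads go to the -read service, which fronts the replicas.
WRITE_DSN = ("host=db-postgresql.default.svc.cluster.local port=5432 "
             "user=postgres password=<my-password> dbname=app")
READ_DSN = ("host=db-postgresql-read.default.svc.cluster.local port=5432 "
            "user=postgres password=<my-password> dbname=app")

def get_conn(readonly: bool):
    """Route the connection based on the type of DB operation intended."""
    return psycopg2.connect(READ_DSN if readonly else WRITE_DSN)

# Writes hit the master:
with get_conn(readonly=False) as conn, conn.cursor() as cur:
    cur.execute("INSERT INTO events (msg) VALUES (%s)", ("hello",))

# Reads are spread across the slaves:
with get_conn(readonly=True) as conn, conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM events")
    print(cur.fetchone()[0])
```

The usual caveat applies: reads through the -read service can briefly lag behind writes, since streaming replication is asynchronous by default.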