Kubernetes postgres replication with pgpool

11/19/2019

Trying to deploy postgres in kubernetes (https://github.com/paunin/PostDock/tree/master/k8s/example2-single-statefulset),

  1. Used pgpool-II (2 replicas) along with 1 postgres master pod & 2 slave pods.
kubectl get pods -n postgres
NAME                                       READY   STATUS    RESTARTS   AGE
psql-db-pgpool-8****c-7**k             1/1     Running   0          35d
psql-db-pgpool-8****c-m**5             1/1     Running   0          35d
psql-db-node-0                         1/1     Running   0          35d
psql-db-node-1                         1/1     Running   0          35d
psql-db-node-2                         1/1     Running   0          20h
  2. Created a user "test" on the master postgres db as the postgres user.
  3. When trying to connect from within the pods (node-0), authentication succeeds for user "test".
root@postgres-db-node-0:/# psql -h localhost postgres -U test
psql (11.4 (Debian 11.4-1.pgdg90+1))
Type "help" for help.

postgres=> \l
  4. When trying to connect via the cluster's NodePort IP & port, authentication fails for the new user "test" with pool_passwd file does not contain an entry for "test":
psql -h NODE_IP -U test -d postgres --port NODE_PORT
psql: FATAL:  md5 authentication failed
DETAIL:  pool_passwd file does not contain an entry for "test"
  5. Logged in to the pgpool-II pod to investigate:
root@psql-db-pgpool-8****c-7**k:/# cat /usr/local/etc/pool_passwd
user1:md5****422f
replica_user:md5****3

The new user "test" created in the database is not reflected in pgpool. Is that how it is supposed to work, i.e. does pgpool have to be updated every time a new user is created? Or am I missing a step in propagating the user?

-- Sandy
authentication
database-replication
kubernetes
pgpool
postgresql

1 Answer

11/20/2019

The postgres example you deployed uses a Secret object to store user and password credentials, and this is the recommended way of managing sensitive data in kubernetes deployments.

The example contains the following instructions:

  • Create the namespace: kubectl create -f ./namespace/
  • Create the configs: kubectl create -f ./configs/
  • Create the volumes: kubectl create -f ./volumes/
  • Create the services: kubectl create -f ./services/
  • Create the nodes: kubectl create -f ./nodes/
  • Create pgpool: kubectl create -f ./pgpool/

If you followed them in the correct order, the mysystem-secret Secret object was created from configs/secret.yml when kubectl create -f ./configs/ was called.

apiVersion: v1
kind: Secret
metadata:
  namespace: mysystem
  name: mysystem-secret
type: Opaque
data:
  app.db.user: d2lkZQ== #wide
  app.db.password: cGFzcw== #pass
  app.db.cluster.replication.user: cmVwbGljYV91c2Vy #replica_user
  app.db.cluster.replication.password: cmVwbGljYV9wYXNz #replica_pass
  app.db.pool.users: d2lkZTpwYXNz #wide:pass
  app.db.pool.pcp.user: cGNwX3VzZXI= #pcp_user
  app.db.pool.pcp.password: cGNwX3Bhc3M= #pcp_pass

Note that the comment next to each encoded value is the decoded password, which should be avoided in a production setting.
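To check what a deployed value actually decodes to, you can read it back from the secret. A minimal sketch, assuming the mysystem namespace from the example (dots inside key names must be escaped in jsonpath):

# Read one key from the deployed secret and decode it; adjust
# the namespace and key name to your cluster.
kubectl get secret mysystem-secret -n mysystem \
  -o jsonpath='{.data.app\.db\.user}' | base64 --decode
wide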

Then the user and password credentials from mysystem-secret are used by kubectl create -f ./nodes/ and kubectl create -f ./pgpool/ as environment variables that are present in all replicas and can be used to connect to the database.

...
            - name: "POSTGRES_USER"
              valueFrom:
                secretKeyRef:
                  name: mysystem-secret
                  key: app.db.user
            - name: "POSTGRES_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: mysystem-secret
                  key: app.db.password
...
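You can confirm the credentials actually landed in a pod's environment. A quick check, using the pod name and namespace from your question (adjust them to your cluster):

# List the injected postgres credentials inside a database node pod.
kubectl exec -n postgres psql-db-node-0 -- env | grep POSTGRES_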

If you want to use your own user and password, you need to modify the configs/secret.yml file and replace the values you wish to change with base64-encoded passwords.

You can easily encode any password to base64 with the following command:

echo -n 'admin' | base64
YWRtaW4=
echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
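Applied to your case, the app.db.pool.users entry is what feeds pgpool's pool_passwd, so the "test" user would need to be added there. A hedged sketch, assuming that key accepts a comma-separated list of user:password pairs (the example only shows the single wide:pass pair, so verify the format against the PostDock documentation):

# Assumed format: comma-separated user:password pairs.
echo -n 'wide:pass,test:testpass' | base64
d2lkZTpwYXNzLHRlc3Q6dGVzdHBhc3M=

Put the resulting string under app.db.pool.users in configs/secret.yml, then recreate the secret and restart the pgpool pods so they regenerate pool_passwd.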

Update:

To add additional users that work with pgpool after cluster deployment, you can use the postgres-operator tool. Users added manually by exec-ing into a pod and creating them locally are not propagated to the other nodes or to pgpool.

Follow these instructions to install the Postgres Operator (pgo client) and configure it to work with kubernetes.
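As a sketch, creating a user through the operator looks roughly like this with a pgo 4.x client; the cluster name mycluster and the exact flags are assumptions, so check the reference for your operator version:

# Hypothetical pgo 4.x invocation; cluster name "mycluster" is illustrative.
pgo create user mycluster --username=test --password=testpass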

-- Piotr Malec
Source: StackOverflow