I have a big (300 GB) Postgres DB running in a GKE cluster (StatefulSet, SSD volume), and I need to move it to another GKE cluster.
What is the easiest way to accomplish this?
I tried piping `pg_dump` into `pg_restore`, but it takes forever and, for some reason, not all constraints/triggers were recreated.
Is there a proper way to gracefully shut down a Postgres server running in Kubernetes and copy the `/pgdata` folder directly from one volume to another?
Any other ideas? Thanks!
I have a few ideas (listed from most to least promising) for how you could approach this:
1. Remember to use a proper format with `pg_dump`. The default plain format cannot be read by `pg_restore` at all; it is meant to be restored with `psql -f`. If you want `pg_restore` (and its parallel restore option `-j`), dump in custom (`-Fc`), directory (`-Fd`), or tar (`-Ft`) format; see the first sketch after this list. Either way, remember that 300 GB will take a while.
2. You can use a tool to assist you with this, for example pghoard.
3. You can make a tarred backup of your DB and copy it as an object via Google Cloud Storage; see the second sketch below.
4. You can create PVCs manually in the new cluster, attach a pod to them, and then copy your dataset onto those volumes; see the third sketch below.
5. Finally, you may try an init container that seeds the data volume for your new cluster; see the last sketch below.
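For point 1, a minimal sketch, assuming the old and new servers are reachable as `old-db-host` and `new-db-host` and the database is called `mydb` (all placeholders):

```bash
# Dump in custom format (-Fc); the plain (default) format can only be
# restored with psql, not pg_restore.
pg_dump -h old-db-host -U postgres -Fc -f mydb.dump mydb

# Restore with 4 parallel jobs (-j 4). Parallel restore needs a seekable
# file, which is why piping pg_dump straight into pg_restore rules it out.
pg_restore -h new-db-host -U postgres -d mydb -j 4 mydb.dump
```

Parallel restore alone usually cuts the restore time considerably on a DB this size.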
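For point 3, a sketch using a tar-format dump moved through GCS; `my-bucket` is a placeholder, and wherever you run this needs `gsutil` access to that bucket:

```bash
# On the old side: take a tar-format dump and upload it as a GCS object.
pg_dump -h old-db-host -U postgres -Ft -f mydb.tar mydb
gsutil cp mydb.tar gs://my-bucket/mydb.tar

# On the new side: download the object and restore it.
gsutil cp gs://my-bucket/mydb.tar .
pg_restore -h new-db-host -U postgres -d mydb mydb.tar
```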
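For point 4, a sketch of a manually created PVC plus a throwaway pod you can copy the dataset into; the names, size, and storage class are assumptions:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-new
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ssd          # assumed SSD StorageClass in the new cluster
  resources:
    requests:
      storage: 400Gi             # headroom over the 300 GB dataset
---
apiVersion: v1
kind: Pod
metadata:
  name: pg-copy
spec:
  containers:
  - name: copy
    image: busybox
    command: ["sleep", "86400"]  # keep the pod alive while copying
    volumeMounts:
    - name: data
      mountPath: /pgdata
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pgdata-new
EOF

# Tar up the old data directory, then copy it into the new volume:
kubectl cp ./pgdata.tar.gz pg-copy:/pgdata/pgdata.tar.gz
kubectl exec pg-copy -- tar xzf /pgdata/pgdata.tar.gz -C /pgdata
```

This also covers the graceful-shutdown part of your question: scale the old StatefulSet to 0 (the pod gets a SIGTERM and Postgres shuts down cleanly) before taking the tarball, so nothing writes to `/pgdata` during the copy.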
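For point 5, a sketch of an init container that seeds the data volume from the GCS object before Postgres starts; the image, bucket, and PVC names are assumptions, and the pod needs read access to the bucket (e.g. via Workload Identity):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: postgres-seeded
spec:
  initContainers:
  - name: seed-data
    image: google/cloud-sdk:slim   # provides gsutil
    command:
    - sh
    - -c
    # Only seed an empty volume, so restarts don't clobber live data.
    - test -f /pgdata/PG_VERSION || (gsutil cp gs://my-bucket/pgdata.tar.gz /tmp/ && tar xzf /tmp/pgdata.tar.gz -C /pgdata)
    volumeMounts:
    - name: data
      mountPath: /pgdata
  containers:
  - name: postgres
    image: postgres:13
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pgdata-new
EOF
```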
I suggest starting with point 1, as I think it is the most likely solution. If that is not enough, work down the list.
Please let me know if that helped.