I have a postgres container running in a Pod on GKE, with a PersistentVolume set up to store the data. However, all of the data in the database is lost if the cluster reboots or if the Pod is deleted.
If I run kubectl delete <postgres_pod> to delete the existing Pod and then check the new Pod that Kubernetes creates to replace it, the database no longer has the data it had before the Pod was deleted.
Here are the yaml files I used to deploy postgres.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-storage
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
volumeBindingMode: Immediate
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-volume-claim
spec:
  storageClassName: custom-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11.5
          resources: {}
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: "dbname"
            - name: POSTGRES_USER
              value: "user"
            - name: POSTGRES_PASSWORD
              value: "password"
          volumeMounts:
            - mountPath: /var/lib/postgresql/
              name: postgresdb
      volumes:
        - name: postgresdb
          persistentVolumeClaim:
            claimName: postgres-volume-claim
I double-checked that the persistentVolumeReclaimPolicy is set to Retain.
What am I missing?
Is the cluster creating a new volume each time you delete a pod? Check with kubectl get pv.
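A quick way to check (just a sketch; the claim name is the one from your PVC manifest) is to compare the volume bound to the claim before and after deleting the Pod:

  kubectl get pvc postgres-volume-claim    # note the name in the VOLUME column
  kubectl delete pod <postgres_pod>
  kubectl get pvc postgres-volume-claim    # the VOLUME name should not change
  kubectl get pv                           # the PV should still show STATUS Bound, not Released

If the VOLUME name changes or a second PV appears, a new disk is being provisioned for each Pod.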
Is this a multi-zone cluster? Your storage class is not provisioning regional disks, so you might be getting a new disk when the pod moves from one zone to another.
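For comparison, a regional disk can be requested from the same provisioner via the replication-type parameter. A hedged sketch (the class name custom-storage-regional is made up, and WaitForFirstConsumer is the binding mode usually recommended for topology-constrained disks):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: custom-storage-regional
  provisioner: kubernetes.io/gce-pd
  parameters:
    type: pd-standard
    replication-type: regional-pd
  reclaimPolicy: Retain
  volumeBindingMode: WaitForFirstConsumer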
Possibly related to your problem, the postgres container reference recommends setting the PGDATA env variable to a subdirectory such as /var/lib/postgresql/data/pgdata when the data volume is a filesystem mountpoint (see https://hub.docker.com/_/postgres#pgdata).
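Applied to your Deployment, that would look roughly like the fragment below (only the changed parts of the container spec are shown; this is a sketch of the pattern from the linked reference, where the volume is mounted at /var/lib/postgresql/data and PGDATA points at a pgdata subdirectory inside it):

          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
            - name: POSTGRES_DB
              value: "dbname"
            - name: POSTGRES_USER
              value: "user"
            - name: POSTGRES_PASSWORD
              value: "password"
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgresdb

The subdirectory matters on GCE persistent disks because the mount point itself contains a lost+found directory, and initdb refuses to initialize a non-empty data directory.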