Kubernetes - Pod which encapsulates DB is crashing

6/18/2018

I am experiencing issues when I try to deploy my Django application to a Kubernetes cluster; more specifically, when I try to deploy PostgreSQL.

Here is what my .yml deployment file looks like:

apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres-container
    tier: backend
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
  type: ClusterIP
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  labels:
      type: local
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 2Gi
  hostPath:
    path: /tmp/data/persistent-volume-1 # within node n
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-container
      tier: backend
  template:
    metadata:
      labels:
        app: postgres-container
        tier: backend
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6.6
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: user

            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password

            - name: POSTGRES_DB
              value: agent_technologies_db
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-volume-mount
              mountPath: /var/lib/postgresql/data/db-files

      volumes:
        - name: postgres-volume-mount
          persistentVolumeClaim:
            claimName: postgres-pv-claim
        - name: postgres-credentials
          secret:
            secretName: postgres-credentials

Here is what I get when I run the kubectl get pods command:

NAME                                             READY     STATUS             RESTARTS   AGE
agent-technologies-deployment-7c7c6676ff-8p49r   1/1       Running            0          2m
agent-technologies-deployment-7c7c6676ff-dht5h   1/1       Running            0          2m
agent-technologies-deployment-7c7c6676ff-gn8lp   1/1       Running            0          2m
agent-technologies-deployment-7c7c6676ff-n9qql   1/1       Running            0          2m
postgres-8676b745bf-8f7jv                        0/1       CrashLoopBackOff   4          3m

And here is what I get when I try to inspect the PostgreSQL deployment using kubectl logs $pod_name:

initdb: directory "/var/lib/postgresql/data" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/var/lib/postgresql/data" or run initdb
with an argument other than "/var/lib/postgresql/data".
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

Note: I am using Google Cloud as a provider.

-- ─ćepa
google-cloud-platform
google-kubernetes-engine
kubernetes

1 Answer

6/18/2018

You can't have your database in /var/lib/postgresql/data/whatever.

Change that path to /var/lib/postgresql/whatever and it will work. The PostgreSQL documentation explains why:

17.2.1. Use of Secondary File Systems

Many installations create their database clusters on file systems (volumes) other than the machine's "root" volume. If you choose to do this, it is not advisable to try to use the secondary volume's topmost directory (mount point) as the data directory. Best practice is to create a directory within the mount-point directory that is owned by the PostgreSQL user, and then create the data directory within that. This avoids permissions problems, particularly for operations such as pg_upgrade, and it also ensures clean failures if the secondary volume is taken offline.
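Following that advice, one way to adjust the Deployment (a sketch, untested against your cluster) is to mount the volume at the image's default data location but point PGDATA, an environment variable the official postgres image honors, at a subdirectory of the mount, so initdb gets an empty directory of its own:

```yaml
# Sketch: keep the volume at the default location, but tell initdb to use
# a subdirectory of the mount point, per the docs quoted above.
# The PGDATA entry is an addition; it is not in the original manifest.
containers:
  - name: postgres-container
    image: postgres:9.6.6
    env:
      - name: PGDATA
        value: /var/lib/postgresql/data/pgdata
    volumeMounts:
      - name: postgres-volume-mount
        mountPath: /var/lib/postgresql/data
```

This keeps the data on the persistent volume while avoiding the "directory exists but is not empty" failure, since the mount point itself (which may contain lost+found or other artifacts) is no longer the data directory.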

And, by the way, I had to create a secret myself, since it is not included in the post:

apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
data:
  user: cG9zdGdyZXM=            #postgres
  password: cGFzc3dvcmQ=        #password

Note that the username needs to be "postgres". I don't know if you are covering this...
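The data values in a Secret must be base64-encoded; assuming a POSIX shell, the two values above can be produced like this:

```shell
# Encode the credentials for the Secret's data fields.
# printf '%s' avoids the trailing newline that echo would add.
printf '%s' postgres | base64    # -> cG9zdGdyZXM=
printf '%s' password | base64    # -> cGFzc3dvcmQ=
```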

-- suren
Source: StackOverflow