Kubernetes + Django / PostgreSQL - How do I specify the HOST of my PostgreSQL database when I deploy it to a Kubernetes cluster?

5/24/2018

I am having a lot of issues configuring my Dockerized Django + PostgreSQL application to work on a Kubernetes cluster, which I have created using Google Cloud Platform.

How do I specify DATABASES.default.HOST in my settings.py file when I deploy a PostgreSQL image from Docker Hub and an image of my Django web application to the Kubernetes cluster?

Here is how I want my app to work. When I run the application locally, I want to use an SQLite database; to do that I have made the following changes in my settings.py file:

if os.getenv('DB') is None:
    print('Development - Using "SQLITE3" Database')
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        }
    }
else:
    print('Production - Using "POSTGRESQL" Database')
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'agent_technologies_db',
            'USER': 'stefan_radonjic',
            'PASSWORD': 'cepajecar995',
            'HOST': ,  # ???
            'PORT': ,  # ???
        }
    }

The main idea is that when I deploy the application to the Kubernetes cluster, a Docker container (my Dockerized Django application) will run inside a Kubernetes Pod. When creating the container I also create an environment variable DB and set it to True, so when I deploy the application the PostgreSQL database is used.

NOTE: If anyone has any other suggestions on how I should separate local from production settings, please leave a comment.

Here is what my Dockerfile looks like:

FROM python:3.6

ENV PYTHONUNBUFFERED 1
RUN mkdir /agent-technologies
WORKDIR /agent-technologies
COPY . /agent-technologies 
RUN pip install -r requirements.txt

EXPOSE 8000

And here is what my docker-compose file looks like:

version: '3'
services:
  web:
    build: .
    command: python src/manage.py runserver --settings=agents.config.settings
    volumes: 
      - .:/agent-technologies
    ports: 
      - "8000:8000"
    environment:
      - DB=true

When running the application locally it works perfectly fine. But when I try to deploy it to the Kubernetes cluster, the Pods running my application containers crash in an infinite loop, because I don't know how to specify DATABASES.default.HOST when running the app in the production environment. And of course the command specified in the docker-compose file (command: python src/manage.py runserver --settings=agents.config.settings) probably raises an exception, which makes the Pods keep crashing.

NOTE: I have already created all the necessary configuration files for Kubernetes (Deployment definitions / Services / Secrets / Volume files). Here is my GitHub link: https://github.com/StefanCepa/agent-technologies-bachelor

Any help would be appreciated! Thank you all in advance!

-- Stefan Radonjic
django
docker
kubernetes
postgresql

2 Answers

5/24/2018

You will have to create a Service (ClusterIP) for your postgres pod to make it accessible. Once you create the Service, you can access it via <service name>.default:<port> (default being the namespace). However, running postgres (or any database) as a plain pod is dangerous: you will lose data as soon as you or Kubernetes re-creates the pod or scales it up. Use a managed database service instead, or run postgres properly using a StatefulSet.
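
For reference, a minimal ClusterIP Service for the postgres pod might look like the sketch below. The name postgres-service and the app: postgres selector are assumptions and must match the names/labels used in your postgres deployment:

apiVersion: v1
kind: Service
metadata:
  name: postgres-service        # assumed name; reachable as postgres-service.default inside the cluster
spec:
  type: ClusterIP
  selector:
    app: postgres               # assumed label; must match the labels on your postgres pods
  ports:
  - port: 5432                  # port the Service exposes
    targetPort: 5432            # port the postgres container listens on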

Once you have the address, you can put it in an environment variable and read it from your settings.py.

EDIT: Put this in your deployment yaml (example):

env:
- name: POSTGRES_HOST
  value: "postgres-service.default"
- name: POSTGRES_PORT
  value: "5432"
- name: DB
  value: "DB"

And in your settings.py

'USER': 'stefan_radonjic',
'PASSWORD': 'cepajecar995',
'HOST': os.getenv('POSTGRES_HOST'),
'PORT': os.getenv('POSTGRES_PORT'),
-- Amrit Bera
Source: StackOverflow

5/26/2018

Below are my findings:

  1. The postgres instance depends on a persistent volume. I see the code for the persistent volume claim, but not the persistent volume itself, so I had to create this first:
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: task-pv-volume
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  hostPath:
    path: /tmp/data
  persistentVolumeReclaimPolicy: Retain
  2. There is a typo in agent-technologies-bachelor/agents/config/kubernetes/postgres/secrets-definition.yml: the key password is misspelled (a corrected sketch is included at the end of this answer).
data:
  user: c3RlZmFuX3JhZG9uamlj #stefan_radonjic
  passowrd: sdfsdfsd #cepajecar995

Because of this, the postgres instance was not able to start up. I found this by looking at the events, by running the command kubectl describe pods.

  3. The Docker image didn't have a command to execute the application. As a result, if I ran your Docker image cepa995/agents_web it would simply exit without running any application, which is why the Django application was not running. To fix this, I modified the Dockerfile to add a CMD instruction at the end. I see you put this command in the docker-compose file to run the image, but it has to be inside the Dockerfile itself. The Dockerfile looks like this now:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /agent-technologies
WORKDIR /agent-technologies
COPY . /agent-technologies
RUN pip install -r src/requirements.txt
EXPOSE 8000
CMD python src/manage.py runserver 0.0.0.0:8000 --settings=agents.config.settings
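
As mentioned in point 2, a corrected version of the secret might look like the sketch below. The metadata name is an assumption and must match whatever the postgres deployment references; the base64 values encode the credentials shown in the question, and the key is renamed from the misspelled "passowrd" to "password":

apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials    # assumed name
type: Opaque
data:
  user: c3RlZmFuX3JhZG9uamlj    # base64 of "stefan_radonjic"
  password: Y2VwYWplY2FyOTk1    # base64 of "cepajecar995"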
-- mumshad
Source: StackOverflow