How to run a Kubernetes Job with a temporary helper database

10/17/2018

I am creating a Kubernetes Job, which needs a temporary database for processing its tasks. I have two Docker images: one for the job itself and one for a MySQL database that already contains the schema and the data the job needs.

I want to do something like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: afmigration
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: afmigration
        image: migrate-job:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: user

        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: password

        - name: POSTGRES_HOST
          value: postgres-service
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_DATABASE
          value: tmpdb
      - name: migration-test-data
        image: migration-test-data:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_DATABASE
          value: tmpdb

The problem here is that the database does not get started first. I probably also need to define a Service so the job can reach the database. But my biggest question is: how can I stop the database once the job finishes?

Here is the Dockerfile of migration-test-data (the official mysql image runs any *.sql files it finds in /docker-entrypoint-initdb.d when the database is first initialized):

FROM mysql:8.0
COPY ./*.sql /docker-entrypoint-initdb.d/

Here is the Dockerfile for afmigration, which is the job:

FROM frolvlad/alpine-oraclejdk8:slim

ARG JAR_FILE

COPY target/${JAR_FILE} /app/afmigration.jar

WORKDIR /app

ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app/afmigration.jar"]
-- Safari
kubernetes

1 Answer

10/17/2018

The MySQL database should start normally, so it's possible that there is some problem in your Docker image. Since both containers run within the same Pod, you can access MySQL using localhost, as all containers within the same Pod share the same network stack.
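For example, if your job reads its database host from the environment, it can simply point at 127.0.0.1 instead of a Service name. This is only a sketch: the MYSQL_HOST and MYSQL_PORT variable names are assumptions about what your job actually reads:

containers:
  - name: afmigration
    image: migrate-job:latest
    env:
      # hypothetical variables; use whatever names your job actually reads
      - name: MYSQL_HOST
        value: "127.0.0.1"   # the sidecar MySQL is reachable on localhost inside the Pod
      - name: MYSQL_PORT
        value: "3306"
  - name: migration-test-data
    image: migration-test-data:latest
    ports:
      - containerPort: 3306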

That said, sidecar containers in Jobs are currently not natively supported in Kubernetes, as you can see in this issue: https://github.com/kubernetes/kubernetes/issues/25908

but I would suggest taking a look at the bash magic shown in this comment: https://github.com/kubernetes/kubernetes/issues/25908#issuecomment-308569672

containers:
  - name: main
    image: gcr.io/some/image:latest
    command: ["/bin/bash", "-c"]
    args:
      - |
        trap "touch /tmp/pod/main-terminated" EXIT
        /my-batch-job/bin/main --config=/config/my-job-config.yaml
    volumeMounts:
      - mountPath: /tmp/pod
        name: tmp-pod
  - name: envoy
    image: gcr.io/our-envoy-plus-bash-image:latest
    command: ["/bin/bash", "-c"]
    args:
      - |
        /usr/local/bin/envoy --config-path=/my-batch-job/etc/envoy.json &
        CHILD_PID=$!
        (while true; do if [[ -f "/tmp/pod/main-terminated" ]]; then kill $CHILD_PID; fi; sleep 1; done) &
        wait $CHILD_PID
        if [[ -f "/tmp/pod/main-terminated" ]]; then exit 0; fi
    volumeMounts:
      - mountPath: /tmp/pod
        name: tmp-pod
        readOnly: true
volumes:
  - name: tmp-pod
    emptyDir: {}

So the sidecar simply runs its never-ending program in the background (Envoy in this example) and watches a shared directory where the batch job writes a marker file when it has finished. When the sidecar sees that file, it kills the background process and exits cleanly.
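Applied to your Job, the same trick would look roughly like the sketch below. This is only an illustration under a few assumptions: the marker-file path is arbitrary, the MySQL sidecar is started in the background through the official image's docker-entrypoint.sh, and the job container uses /bin/sh because its Alpine base image does not ship bash:

containers:
  - name: afmigration
    image: migrate-job:latest
    command: ["/bin/sh", "-c"]
    args:
      - |
        # signal the sidecar when the migration ends, no matter how it exits
        trap "touch /tmp/pod/main-terminated" EXIT
        java -jar /app/afmigration.jar
    volumeMounts:
      - mountPath: /tmp/pod
        name: tmp-pod
  - name: migration-test-data
    image: migration-test-data:latest
    command: ["/bin/bash", "-c"]
    args:
      - |
        # run MySQL in the background via the image's normal entrypoint
        docker-entrypoint.sh mysqld &
        CHILD_PID=$!
        # poll for the marker file and stop MySQL once the job is done
        (while true; do if [[ -f "/tmp/pod/main-terminated" ]]; then kill $CHILD_PID; fi; sleep 1; done) &
        wait $CHILD_PID
        if [[ -f "/tmp/pod/main-terminated" ]]; then exit 0; fi
    volumeMounts:
      - mountPath: /tmp/pod
        name: tmp-pod
        readOnly: true
volumes:
  - name: tmp-pod
    emptyDir: {}

With restartPolicy: Never and both containers exiting with code 0, the Pod completes, the Job is marked successful, and the temporary database disappears together with the Pod.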

-- Jakub Bujny
Source: StackOverflow