Job with multiple containers never succeeds

2/8/2017

I'm running Kubernetes in a GKE cluster and need to run a DB migration script on every deploy. For staging this is easy: we have a permanent, separate MySQL service with its own volume. For production, however, we use Google Cloud SQL, which means the Job needs two containers: one for the migration itself and one for the Cloud SQL Proxy.

Because of this second container, the Job always shows 1 active when I run kubectl describe jobs/migration, and I'm at a complete loss. I've tried re-ordering the containers to see whether it checks one by default, but that made no difference, and I can't see a way to either a) kill a container or b) check the status of just one container inside the Job.

Any ideas?

-- J Young
containers
jobs
kubernetes
kubernetes-pod

4 Answers

2/8/2017

You haven't posted enough details about your specific problem. But I'm taking a guess based on experience.

TL;DR: Move your containers into separate jobs if they are independent.

--

A Kubernetes Job keeps restarting its Pod until the Job succeeds, and the Job succeeds only if every container in that Pod terminates successfully.

This means your containers should be written to be restart-proof (idempotent). Once a container has run successfully, it should still return success if it runs again. Otherwise, say container1 succeeds and container2 fails: the Job restarts the Pod, container1 now fails (because its work was already done and it isn't idempotent), and so the Job keeps restarting.
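
For example, a minimal sketch of the separate-Jobs approach might look like this (the names, images, and commands are placeholders, not values from the question):

apiVersion: batch/v1
kind: Job
metadata:
  name: task-one
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task-one
        image: task-one:latest        # placeholder image
        command: ["run-task-one"]     # placeholder command
---
apiVersion: batch/v1
kind: Job
metadata:
  name: task-two
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task-two
        image: task-two:latest        # placeholder image
        command: ["run-task-two"]     # placeholder command

Each Job then tracks the success of a single container, so one long-running or failing container can't keep the other's Job from completing.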

-- iamnat
Source: StackOverflow

9/23/2019

The reason is that the cloud-sql-proxy container/process never terminates, so the Job never registers as complete.

One possible workaround is to move the cloud-sql-proxy to its own Deployment and add a Service in front of it. Your Job then won't be responsible for running the long-running cloud-sql-proxy, and so it will be able to terminate / complete.
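
A rough sketch of that setup, assuming the standard gce-proxy image (the proxy version and the PROJECT:REGION:INSTANCE connection string are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsql-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsql-proxy
  template:
    metadata:
      labels:
        app: cloudsql-proxy
    spec:
      containers:
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.16   # placeholder version
        command:
        - /cloud_sql_proxy
        - -instances=PROJECT:REGION:INSTANCE=tcp:0.0.0.0:3306   # placeholder connection name
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql-proxy
spec:
  selector:
    app: cloudsql-proxy
  ports:
  - port: 3306
    targetPort: 3306

The migration Job then connects to cloudsql-proxy:3306 instead of running its own proxy sidecar.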

-- Chris Stryczynski
Source: StackOverflow

2/8/2017

Each Pod can be configured with an init container, which seems to be a good fit for your issue. Instead of having a Pod with two containers that have to run permanently, you could define an init container to do your migration upfront, e.g. like this:

apiVersion: v1
kind: Pod
metadata:
  name: init-container
  annotations:
    pod.beta.kubernetes.io/init-containers: '[
        {
            "name": "migrate",
            "image": "application:version",
            "command": ["migrate up"],
        }
    ]'
spec:
  containers:
  - name: application
    image: application:version
    ports:
    - containerPort: 80
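
(On newer Kubernetes versions the annotation form shown above has been replaced by the initContainers field in the Pod spec; a rough equivalent, reusing the same placeholder image and command, would be:)

apiVersion: v1
kind: Pod
metadata:
  name: init-container
spec:
  initContainers:
  - name: migrate                  # runs to completion before the app container starts
    image: application:version
    command: ["migrate", "up"]
  containers:
  - name: application
    image: application:version
    ports:
    - containerPort: 80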
-- pagid
Source: StackOverflow

2/15/2018

I know it's a year too late, but best practice would be to run a single cloudsql proxy Service for the whole app, and then configure DB access in the app's image to use that Service as the DB hostname.

This way you won't need to put a cloudsql proxy container into every Pod that uses the DB.
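
For illustration, the migration Job could then reach the database through the proxy Service's in-cluster DNS name; the Service name (cloudsql-proxy) and the DB_HOST/DB_PORT variable names below are assumptions, not part of the answer:

apiVersion: batch/v1
kind: Job
metadata:
  name: migration
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: application:version      # placeholder image
        command: ["migrate", "up"]
        env:
        - name: DB_HOST                 # hypothetical variable read by the app
          value: cloudsql-proxy         # in-cluster DNS name of the proxy Service
        - name: DB_PORT
          value: "3306"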

-- Oleksii Donoha
Source: StackOverflow