Run a Kubernetes CronJob reusing a Deployment template

10/4/2019

I have a pod with 2 containers: a Django webserver and a Cloud SQL proxy.

I want to run a CronJob every day (some django manage.py command). Ideally, I'd like a new container to be created in one of my running pods by copying the webserver container already running there (see the sketch after the steps below):

  1. Find pod A
  2. Copy django container from pod A
  3. Start new django container in pod A
  4. Execute command in new container of pod A
  5. Shut down new container of pod A
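
As far as I can tell, Kubernetes won't let me add a container to an already-running pod, so the closest approximation to steps 1-4 seems to be exec'ing into the existing django container from the CronJob's pod. This is only a rough, untested sketch: the service account, kubectl image, label selector and manage.py command are placeholders, and django-exec-sa would need RBAC permissions for get/list on pods and create on pods/exec.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: somename-exec-cron
spec:
  schedule: "0 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: django-exec-sa  # hypothetical; needs pods get/list and pods/exec create
          restartPolicy: OnFailure
          containers:
            - name: exec-manage
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - |
                  # find one running webserver pod by its label, then run the
                  # manage.py command inside its django container
                  POD=$(kubectl get pods -l app=SomeApp \
                    --field-selector=status.phase=Running \
                    -o jsonpath='{.items[0].metadata.name}')
                  kubectl exec "$POD" -c ContainerName -- python manage.py some_command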

From my understanding, a Kubernetes CronJob creates a new pod of its own. That means I need to copy everything, including the volumes and the proxy container. I tried to do that manually, by copy-pasting the whole pod spec from the Deployment into the CronJob spec:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: SomeName
  labels:
    environment: SomeEnv
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: SomeApp
        name: SomeName2
        environment: SomeEnv
    spec:
      containers:
        - image: gcr.io/org/someimage:tag
          name: ContainerName
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: app-secrets
              mountPath: /var/run/secrets/app
              readOnly: true
          env:
            - name: SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: django
        - image: gcr.io/cloudsql-docker/gce-proxy:1.11
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=org:zone:db=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql

      volumes:
        - name: app-secrets
          secret:
            secretName: app-secrets
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}

---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: SomeName-Cron
  labels:
    environment: SomeEnv
spec:
  schedule: "0 1 * * *"  # Daily at 1am
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure  # Job pods must not use the default restartPolicy Always
          containers:
            - image: gcr.io/org/someimage:tag
              name: ContainerName
              imagePullPolicy: IfNotPresent
              volumeMounts:
                - name: app-secrets
                  mountPath: /var/run/secrets/app
                  readOnly: true
              env:
                - name: SECRET_KEY
                  valueFrom:
                    secretKeyRef:
                      name: app-secrets
                      key: django
            - image: gcr.io/cloudsql-docker/gce-proxy:1.11
              name: cloudsql-proxy
              command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                        "-instances=org:zone:db=tcp:5432",
                        "-credential_file=/secrets/cloudsql/credentials.json"]
              volumeMounts:
                - name: cloudsql-instance-credentials
                  mountPath: /secrets/cloudsql
                  readOnly: true
                - name: ssl-certs
                  mountPath: /etc/ssl/certs
                - name: cloudsql
                  mountPath: /cloudsql
          volumes:
            - name: app-secrets
              secret:
                secretName: app-secrets
            - name: cloudsql-instance-credentials
              secret:
                secretName: cloudsql-instance-credentials
            - name: ssl-certs
              hostPath:
                path: /etc/ssl/certs
            - name: cloudsql
              emptyDir: {}

But the cloud_sql_proxy somehow fails to connect when it runs inside the CronJob's pod:

2019/10/04 08:14:44 New connection for "org:zone:db"
2019/10/04 08:14:44 Throttling refreshCfg(org:zone:db): it was only called 445.482222ms ago
2019/10/04 08:14:44 couldn't connect to "org:zone:db": Post https://www.googleapis.com/sql/v1beta4/projects/org/instances/block-report/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: dial tcp: i/o timeout
^C

These errors are confusing, so I'm stuck at this point.
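
One possible simplification, since a sidecar that never exits would also keep the Job from ever completing: drop the proxy sidecar from the CronJob and run the proxy and the command in a single container. This is only a rough, untested sketch, and it assumes the application image also bundles the cloud_sql_proxy binary (which would mean changing the image):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: somename-cron-single
spec:
  schedule: "0 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: manage
              image: gcr.io/org/someimage:tag  # assumed to also contain /cloud_sql_proxy
              command:
                - /bin/sh
                - -c
                - |
                  # start the proxy in the background, give it a moment to come up,
                  # run the management command, then let the container exit
                  /cloud_sql_proxy -instances=org:zone:db=tcp:5432 \
                      -credential_file=/secrets/cloudsql/credentials.json &
                  sleep 5
                  python manage.py some_command
              volumeMounts:
                - name: cloudsql-instance-credentials
                  mountPath: /secrets/cloudsql
                  readOnly: true
          volumes:
            - name: cloudsql-instance-credentials
              secret:
                secretName: cloudsql-instance-credentials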

Does anyone know of a clean way to have a CronJob run reusing an existing container?

-- Adrien Lemaire
django
google-cloud-platform
kubectl
kubernetes
kubernetes-cronjob

0 Answers