Kubernetes: initContainer with gce-proxy?

4/13/2017

I need to update my database schema before running our app. Based on this thread and on this answer, I've decided to use an init container to do the job.

Since my database is a hosted Google Cloud SQL instance, I need gce-proxy to be able to connect to it. My initContainers section looks like this:

 initContainers:
    - name: cloudsql-proxy-init
      image: gcr.io/cloudsql-docker/gce-proxy:1.09
      command: ["/cloud_sql_proxy"]
      args:
        - --dir=/cloudsql
        - -instances=xxxx:europe-west1:yyyyy=tcp:5432
        - -credential_file=/secrets/cloudsql/credentials.json
      volumeMounts:
        - name: dev-db-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
    - name: liquibase
      image: eu.gcr.io/xxxxx/liquibase:v1
      imagePullPolicy: Always
      command: ["./liquibase.sh"]
      env:
        - name: DB_TYPE
          value: postgresql
        - name: DB_URL
          value: jdbc:postgresql://localhost/test
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: username

But my pod is stuck:

containers with incomplete status: [cloudsql-proxy-init liquibase]

If I look at pod describe:

Init Containers:
  cloudsql-proxy-init:
    Container ID:   docker://0373fa6528ec3768d46a1c59ca45f12d9fc46d1f0d199b7eb3772545701e1b1d
    Image:      gcr.io/cloudsql-docker/gce-proxy:1.09
    Image ID:       docker://sha256:66c58ef63dbfe239ff95416d62635559498ebb395abb8a4b1edee78e48e05fe4
    Port:
    Command:
      /cloud_sql_proxy
    Args:
      --dir=/cloudsql
      -instances=xxxxx:europe-west1:yyyyyy=tcp:5432
      -credential_file=/secrets/cloudsql/credentials.json
    State:      Running
      Started:      Thu, 13 Apr 2017 17:40:02 +0300
    Ready:      False
    Restart Count:  0
    Mounts:
      /cloudsql from cloudsql (rw)
      /etc/ssl/certs from ssl-certs (rw)
      /secrets/cloudsql from dev-db-instance-credentials (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-th58c (ro)
 liquibase:
    Container ID:
    Image:      eu.gcr.io/xxxxxx/liquibase:v1
    Image ID:
    Port:
    Command:
      ./liquibase.sh
    State:      Waiting
      Reason:       PodInitializing
    Ready:      False
    Restart Count:  0
    Environment:
      DB_TYPE:      postgresql
      DB_URL:       jdbc:postgresql://localhost/test
      DB_PASSWORD:  <set to the key 'password' in secret 'db-credentials'>  Optional: false
      DB_USER:      <set to the key 'username' in secret 'db-credentials'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-th58c (ro)

And it seems that cloudsql-proxy-init is running:

2017/04/13 14:40:02 using credential file for authentication; email=yyyyy@xxxxxx.iam.gserviceaccount.com
2017/04/13 14:40:02 Listening on 127.0.0.1:5432 for xxxxx:europe-west1:yyyyy
2017/04/13 14:40:02 Ready for new connections

Which is probably the problem, because an init container should exit so that initialization can continue. So how can I connect from liquibase to the Google Cloud SQL instance?

-- gerasalus
google-cloud-sql
google-kubernetes-engine
kubernetes

2 Answers

4/13/2017

You are expecting the init containers to all run next to each other, like the normal containers in a pod.

Unfortunately, init containers are started one after the other: each one starts only after the previous one has finished. See https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#understanding-init-containers

Init Containers are exactly like regular Containers, except:

  • They always run to completion.
  • Each one must complete successfully before the next one is started.

So you won't be able to run the proxy container alongside your app container this way.

A solution would be to build a container image that has both binaries in it, and then use a shell script to background the proxy and run your migration to completion.
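For illustration, a minimal sketch of that approach, assuming a hypothetical combined image (eu.gcr.io/xxxxx/liquibase-with-proxy:v1) that contains both /cloud_sql_proxy and liquibase.sh. The inline shell backgrounds the proxy, runs the migration, then stops the proxy so the init container can exit:

    initContainers:
      - name: migrate
        image: eu.gcr.io/xxxxx/liquibase-with-proxy:v1   # hypothetical image with both binaries
        command: ["/bin/sh", "-c"]
        args:
          - |
            # start the proxy in the background
            /cloud_sql_proxy --dir=/cloudsql \
              -instances=xxxx:europe-west1:yyyyy=tcp:5432 \
              -credential_file=/secrets/cloudsql/credentials.json &
            PROXY_PID=$!
            # crude wait for the proxy to open the local port
            sleep 5
            # run the migration to completion
            ./liquibase.sh
            STATUS=$?
            # stop the proxy so this init container can terminate
            kill $PROXY_PID
            exit $STATUS
        # DB_TYPE / DB_URL / DB_USER / DB_PASSWORD env vars as in the original manifest
        volumeMounts:
          - name: dev-db-instance-credentials
            mountPath: /secrets/cloudsql
            readOnly: true
          - name: ssl-certs
            mountPath: /etc/ssl/certs
          - name: cloudsql
            mountPath: /cloudsql

Once the script exits, the init container is considered complete and the pod's regular containers can start.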

-- Janos Lenart
Source: StackOverflow

4/14/2017

You're using init containers, which are required to run to completion. The Cloud SQL proxy needs to be running at all times while you're querying the database, so the recommended way to run it is as a second, sidecar container in your pod.

You can find an example here: https://github.com/GoogleCloudPlatform/container-engine-samples/tree/master/cloudsql
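For reference, a minimal sketch of the sidecar layout, reusing the placeholders from the question's manifest (the app image name here is hypothetical). Both containers run for the pod's lifetime, and the app reaches the database through the proxy on localhost:

    containers:
      - name: my-app
        image: eu.gcr.io/xxxxx/my-app:v1          # your application image (placeholder)
        env:
          - name: DB_URL
            value: jdbc:postgresql://localhost/test   # the app connects to the proxy on localhost:5432
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.09
        command: ["/cloud_sql_proxy"]
        args:
          - --dir=/cloudsql
          - -instances=xxxx:europe-west1:yyyyy=tcp:5432
          - -credential_file=/secrets/cloudsql/credentials.json
        volumeMounts:
          - name: dev-db-instance-credentials
            mountPath: /secrets/cloudsql
            readOnly: true
          - name: ssl-certs
            mountPath: /etc/ssl/certs
          - name: cloudsql
            mountPath: /cloudsql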

-- AhmetB - Google
Source: StackOverflow