How to create CloudSQL Proxy credentials as secrets on GKE

11/23/2018

I've followed the steps at https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine to set up MySQL user accounts and service accounts. I've downloaded the JSON file containing my credentials.

My issue is that in the code I copied from the site:

- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  securityContext:
    runAsUser: 2  # non-root user
    allowPrivilegeEscalation: false
  volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true

the path /secrets/cloudsql/credentials.json is specified and I have no idea where it's coming from.

I think I'm supposed to create the credentials as a secret via

kubectl create secret generic cloudsql-instance-credentials --from-file=k8s\secrets\my-credentials.json

But after that I have no idea what to do. How does this secret become the path /secrets/cloudsql/credentials.json?

-- shalvah
cloud-sql-proxy
google-cloud-sql
google-kubernetes-engine
kubernetes

3 Answers

11/23/2018

You have to add a volume entry under the spec, like so:

  volumes:
    - name: cloudsql-instance-credentials
      secret:
        defaultMode: 420
        secretName: cloudsql-instance-credentials

Note: this goes in the pod spec (spec.template.spec in a Deployment), not the container spec.
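
For context, here is a minimal sketch of where that entry sits in a Deployment manifest; the metadata names are illustrative, not from the original guide, and the container details are elided:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-app                 # illustrative name
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: my-app
    template:
      metadata:
        labels:
          app: my-app
      spec:
        containers:
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          # command, securityContext and volumeMounts as in the question
        volumes:                 # sibling of containers, not nested inside it
        - name: cloudsql-instance-credentials
          secret:
            defaultMode: 420
            secretName: cloudsql-instance-credentials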

-- gries
Source: StackOverflow

11/23/2018

We can mount ConfigMaps or Secrets as files inside a pod's containers and then read them at runtime wherever we need them. To do that, we have to set them up properly:

  • Create the Secret (or ConfigMap); see the kubectl sketch below.
  • Add a volume for the Secret under .spec.volumes in the pod (if you deploy the pod with a Deployment, add the volume under .spec.template.spec.volumes).
  • Mount the created volume via .spec.containers[].volumeMounts.

Ref: the official Kubernetes documentation.
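
A minimal sketch of the first step, assuming you name the Secret's key explicitly so the mounted file is called credentials.json (the key name becomes the file name under the mount path); the local path is the one from the question:

  kubectl create secret generic cloudsql-instance-credentials \
    --from-file=credentials.json=k8s/secrets/my-credentials.json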

Here is a sample for your use case:

  spec:
    containers:
    - name: cloudsql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.11
      command: ["/cloud_sql_proxy",
                "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306",
                "-credential_file=/secrets/cloudsql/credentials.json"]
      securityContext:
        runAsUser: 2  # non-root user
        allowPrivilegeEscalation: false
      volumeMounts:
      - name: cloudsql-instance-credentials
        mountPath: /secrets/cloudsql
        readOnly: true
    volumes:
    - name: cloudsql-instance-credentials
      secret:
        defaultMode: 511
        secretName: cloudsql-instance-credentials
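
Once the Secret is mounted, each key in it appears as a file named after the key under mountPath, which is how a key called credentials.json ends up at /secrets/cloudsql/credentials.json. A quick way to verify from a running pod (the pod name here is illustrative):

  kubectl exec <your-pod-name> -c cloudsql-proxy -- ls -l /secrets/cloudsql
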
-- Shudipta Sharma
Source: StackOverflow

5/14/2019

The current answers are good, but I wanted to provide a more complete example. This came verbatim from some of Google's old docs from two years ago (which no longer exist). Replace @@PROJECT@@ and @@DBINST@@ with your own values.

The volumes section loads the secret, and volumeMounts makes it visible to the postgres-proxy container at /secrets/cloudsql.

    spec:
      volumes:
      - name: cloudsql-oauth-credentials
        secret:
          secretName: cloudsql-oauth-credentials
      - name: cloudsql
        emptyDir: {}
      containers:
      - name: postgres-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.09
        imagePullPolicy: Always
        command: ["/cloud_sql_proxy",
                  "--dir=/cloudsql",
                  "-instances=@@PROJECT@@:us-central1:@@DBINST@@=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
          - name: cloudsql-oauth-credentials
            mountPath: /secrets/cloudsql
            readOnly: true
          - name: cloudsql
            mountPath: /cloudsql
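
To go with this, a sketch of creating the matching Secret, assuming the downloaded service-account key file is saved locally as credentials.json (the key name has to match the -credential_file path above):

    kubectl create secret generic cloudsql-oauth-credentials \
      --from-file=credentials.json=./credentials.json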
-- Charles Thayer
Source: StackOverflow