Google Cloud, Kubernetes and Cloud SQL proxy: default Compute Engine service account issue

11/27/2019

I have Google Cloud projects A, B, C, and D. They all use a similar setup for the Kubernetes cluster and deployment. Projects A, B, and C were built months ago. They all use the Google Cloud SQL proxy to connect to the Google Cloud SQL service. Recently, when I started setting up Kubernetes for project D, I got the following error in the Stackdriver logging:

the default Compute Engine service account is not configured with sufficient permissions to access the Cloud SQL API from this VM. Please create a new VM with Cloud SQL access (scope) enabled under "Identity and API access". Alternatively, create a new "service account key" and specify it using the -credential_file parameter

I have compared the Kubernetes clusters of A, B, C, and D, but they appear to be the same.

Here is the deployment I am using:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-site
  labels:
    system: projectA
spec:
  selector:
    matchLabels:
      system: projectA
  template:
    metadata:
      labels:
        system: projectA
    spec:
      containers:
        - name: web
          image: gcr.io/customerA/projectA:alpha1
          ports:
            - containerPort: 80
          env:
            - name: DB_HOST
              value: 127.0.0.1:3306
            # These secrets are required to start the pod.
            # [START cloudsql_secrets]
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
          # [END cloudsql_secrets]
        # Change <INSTANCE_CONNECTION_NAME> here to include your GCP
        # project, the region of your Cloud SQL instance and the name
        # of your Cloud SQL instance. The format is
        # $PROJECT:$REGION:$INSTANCE
        # [START proxy_container]
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command:
            # Run the proxy binary directly so both flags reach it.
            # (With a `sh -c` wrapper, any flag listed as a separate
            # array item is an argument to sh, not to the proxy, and
            # is silently ignored.)
            - /cloud_sql_proxy
            - -instances=my-gcloud-project:europe-west1:databaseName=tcp:3306
            - -credential_file=/secrets/cloudsql/credentials.json
          # [START cloudsql_security_context]
          securityContext:
            runAsUser: 2  # non-root user
            allowPrivilegeEscalation: false
          # [END cloudsql_security_context]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
      # [END volumes]
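
For completeness, the two secrets referenced above (`cloudsql-db-credentials` and `cloudsql-instance-credentials`) were created with kubectl along these lines (the local key file name and the `<...>` placeholders are illustrative, not the actual values):

```shell
# Service account key file mounted by the proxy sidecar
kubectl create secret generic cloudsql-instance-credentials \
  --from-file=credentials.json=./credentials.json

# Database username and password consumed by the web container
kubectl create secret generic cloudsql-db-credentials \
  --from-literal=username=<DB_USER> \
  --from-literal=password=<DB_PASSWORD>
```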

So it would appear that the default service account doesn't have sufficient permissions? Google Cloud doesn't allow enabling the Cloud SQL API access scope when creating the cluster via the Google Cloud console.

From what I have found googling this issue, some say the problem was with the gcr.io/cloudsql-docker/gce-proxy image, but I have tried newer versions and the same error still occurs.

-- Camoflame
cloud-sql-proxy
gcloud
google-cloud-platform
kubernetes

1 Answer

11/27/2019

I found a solution to this problem: setting the --service-account argument when creating the cluster. Note that I haven't tested what the minimum required permissions for the new service account are.

Here are the steps:

  • Create a new service account; no API key is required. I named mine "super-service".
  • Assign the roles Cloud SQL Admin, Compute Admin, Kubernetes Engine Admin, and Editor to the new service account.
  • Use gcloud to create the cluster with the new service account, like this:
gcloud container clusters create my-cluster \
--zone=europe-west1-c \
--labels=system=projectA \
--num-nodes=3 \
--enable-master-authorized-networks \
--enable-network-policy \
--enable-ip-alias \
--service-account=super-service@project-D.iam.gserviceaccount.com \
--master-authorized-networks <list-of-my-ips>
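
For reference, the first two steps can also be done with gcloud along these lines (the project ID matches the cluster command above; as noted, this role list is probably broader than strictly necessary):

```shell
# Step 1: create the service account (no key needed for this approach)
gcloud iam service-accounts create super-service \
  --display-name="super-service" \
  --project=project-D

# Step 2: grant the four roles used above
for role in roles/cloudsql.admin roles/compute.admin \
            roles/container.admin roles/editor; do
  gcloud projects add-iam-policy-binding project-D \
    --member="serviceAccount:super-service@project-D.iam.gserviceaccount.com" \
    --role="$role"
done
```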

After that, the cluster and the deployment were created without errors.

-- Camoflame
Source: StackOverflow