Cloud SQL Docker container fails to connect to SQL instance for a while, then connects, in Kubernetes

7/2/2019

I am trying to connect to my GCP Cloud SQL instance using the Cloud SQL proxy Docker container. I also have a service that depends on the proxy to connect to the database in GKE. Together, these two containers make up a pod.

The connection always seems to fail three or four times and then connects successfully. This happens every time I deploy a new version of my service by creating an updated Kubernetes pod.

The relevant part of my Kubernetes deployment is:

...

- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.12
  command:
    - /cloud_sql_proxy
    - -instances=my-project:europe-west1:my-instance=tcp:5432
    - -credential_file=/secrets/cloudsql/credentials.json

...

I expect the Cloud SQL proxy to connect to my instance immediately, but I get this error a few times in my logs:

couldn't connect to "my-project:europe-west1:my-instance": Post https://www.googleapis.com/sql/v1beta4/projects/my-project/instances/my-instance/createEphemeral?alt=json&prettyPrint=false: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: net/http: TLS handshake timeout
-- Magondu
docker
google-cloud-platform
google-cloud-sql
google-kubernetes-engine

3 Answers

7/2/2019

OAuth requires using refresh tokens to acquire new access tokens, since access tokens have limited lifetimes for security. A refresh token allows your application to keep accessing Cloud SQL. Create a new token; this automatically invalidates the oldest one.
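
If the problem really is stale credentials, one possible fix is to rotate the service account key that the proxy mounts from the cloudsql-instance-credentials secret. A rough sketch, assuming the proxy runs as a service account named cloudsql-proxy@my-project.iam.gserviceaccount.com (that account name is an assumption, not taken from the question):

    # Generate a new key file for the proxy's service account
    # (the service account email below is a placeholder)
    gcloud iam service-accounts keys create credentials.json \
        --iam-account=cloudsql-proxy@my-project.iam.gserviceaccount.com

    # Recreate the Kubernetes secret the proxy mounts at /secrets/cloudsql
    kubectl delete secret cloudsql-instance-credentials
    kubectl create secret generic cloudsql-instance-credentials \
        --from-file=credentials.json=credentials.json

After that, restart the pod so the proxy picks up the new key.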

-- guillermo rojas
Source: StackOverflow

7/3/2019

Add the option --dir=/cloudsql after /cloud_sql_proxy:

  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.14
    command: ["/cloud_sql_proxy",
                "--dir=/cloudsql",
                "-instances=my-project:europe-west1:my-instance=tcp:3306",
                # If running on a VPC, the Cloud SQL proxy can connect via Private IP. See:
                # https://cloud.google.com/sql/docs/mysql/private-ip for more info.
                # "-ip_address_types=PRIVATE",
                "-credential_file=/secrets/cloudsql/credentials.json"]
    # [START cloudsql_security_context]
    securityContext:
      runAsUser: 2  # non-root user
      allowPrivilegeEscalation: false
    # [END cloudsql_security_context]
    volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
    - name: cloudsql
      mountPath: /cloudsql
  # [END proxy_container]
  # [START volumes]
  volumes:
  - name: cloudsql-instance-credentials
    secret:
      secretName: cloudsql-instance-credentials
  - name: cloudsql
    emptyDir: {}
  # [END volumes]
-- Le Khiem
Source: StackOverflow

7/5/2019

This error indicates an overloaded pod or a slow network at startup. Since you are connecting to a Google OAuth endpoint, I will rule out the other side being the problem.

If the cluster or a pod is overloaded (hitting memory limits, CPU at 100%, etc.), network responses can start failing.
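
One way to reduce that pressure is to give the cloudsql-proxy container explicit resource requests and limits so it is not starved while the node schedules the new pod. A minimal sketch based on the container from the question; the CPU and memory values are illustrative assumptions, and the securityContext and volumeMounts are omitted for brevity:

- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.14
  command: ["/cloud_sql_proxy",
            "-instances=my-project:europe-west1:my-instance=tcp:5432",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  # Reserve CPU and memory so the proxy is not starved during a rollout
  # (values below are examples only; tune them for your workload)
  resources:
    requests:
      cpu: 100m
      memory: 64Mi
    limits:
      cpu: 200m
      memory: 128Mi

You can confirm the symptom during a rollout with kubectl top pods and kubectl describe nodes to see whether CPU or memory is the bottleneck.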

-- John Hanley
Source: StackOverflow