I have a Kubernetes project with several applications running in pods and a PostgreSQL database running in Google Cloud SQL. Following this manual I've done everything up to Step 6. I've created a Deployment configuration for the proxy rules and deployed it to the Kubernetes project, but the pod doesn't start. I can't find where I went wrong.
Here is my configuration:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-proxy
  labels:
    app: postgres-proxy
spec:
  template:
    metadata:
      labels:
        app: postgres-proxy
    spec:
      containers:
        - name: app
          image: postgres-rules
          ports:
            - containerPort: 80
          # The following environment variables will contain the database host,
          # user and password to connect to the PostgreSQL instance.
          env:
            - name: POSTGRES_DB_HOST
              value: 127.0.0.1:5432
            # [START cloudsql_secrets]
            - name: POSTGRES_DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: POSTGRES_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            # [END cloudsql_secrets]
        # Change <INSTANCE_CONNECTION_NAME> here to include your GCP
        # project, the region of your Cloud SQL instance and the name
        # of your Cloud SQL instance. The format is
        # $PROJECT:$REGION:$INSTANCE
        # [START proxy_container]
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=pr-business-kubernetes:us-west1:postgresql-data1=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
      # [END volumes]
And all that I see in the end is:
Any help, please?
For anyone who comes across a similar issue: make sure the secret for the proxy pod has been mounted correctly.
The application pod relies on the cloudsql-proxy pod to be running for it to be able to start.
If both pods cannot start, the source of the issue is likely the cloudsql-proxy pod. A describe command on the cloudsql-proxy pod may provide more clues into the issue than a describe command on the application pod (although describing both is recommended):

kubectl describe pod app
kubectl describe pod cloudsql-proxy
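Before (or alongside) describing, listing pod statuses shows at a glance which pod is failing and why (CrashLoopBackOff, CreateContainerConfigError, etc.), and the proxy container's own logs usually state the exact credential or connection error. These are standard kubectl commands; the pod name is a placeholder you should replace with the actual name shown by get pods, and -c selects the cloudsql-proxy container if it runs as a sidecar in the same pod:

```shell
kubectl get pods
kubectl logs <proxy-pod-name> -c cloudsql-proxy
```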
As the cloudsql-proxy pod uses a mounted secret, one reason it may not start is an issue with that mount, for example if something went wrong when the secret was created. The describe command should produce output pointing this out if that is the case. Running kubectl get secrets validates whether the secret the pod is trying to mount does in fact exist. If not, it can be created.
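If the secret is missing, it can be recreated from the Cloud SQL service-account key file, as in the GKE Cloud SQL guide the question follows. The secret name matches the one referenced in the Deployment's volumes section; the key file path is a placeholder for your downloaded JSON key:

```shell
# Create the secret the proxy's volume mount expects;
# replace [PROXY_KEY_FILE_PATH] with the path to your service-account key.
kubectl create secret generic cloudsql-instance-credentials \
    --from-file=credentials.json=[PROXY_KEY_FILE_PATH]
```

After creating the secret, delete the failing pod (or re-apply the Deployment) so it is rescheduled with the mount in place.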