I'm having trouble establishing an SSL connection between a web service and a remotely hosted Postgres database. Using the same cert and key files as the web service, I can connect to the database with tools such as pgAdmin and DataGrip. These files were downloaded from the Postgres instance in the Google Cloud Console.
Issue:
When the Spring Boot service starts up, the following error occurs:
org.postgresql.util.PSQLException: Could not read SSL key file /tls/tls.key
When I look at the Postgres server logs, the error is recorded as
LOG: could not accept SSL connection: UNEXPECTED_RECORD
Setup:
Spring Boot service running on Minikube (local) and on GKE, connecting to a Google Cloud SQL Postgres instance.
Actions Taken:
I downloaded the client cert & key and created a K8s TLS Secret from them. I also made sure the files can be read from the volume mount by running the following command in the k8s deployment config:
command: ["bin/sh", "-c", "cat /tls/tls.key"]
Here is the datasource URL, which is fed in via an environment variable (DATASOURCE_URL).
"jdbc:postgresql://[Database-Address]:5432/[database]?ssl=true&sslmode=require&sslcert=/tls/tls.crt&sslkey=/tls/tls.key"
Here is the k8s deployment YAML. Any idea where I'm going wrong?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "service.name" . }}
  labels:
    release: {{ template "release.name" . }}
    chart: {{ template "chart.name" . }}
    chart-version: {{ template "chart.version" . }}
    release: {{ template "service.fullname" . }}
spec:
  replicas: {{ $.Values.image.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: {{ template "service.name" . }}
        release: {{ template "release.name" . }}
        env: {{ $.Values.environment }}
    spec:
      imagePullSecrets:
        - name: {{ $.Values.image.pullSecretsName }}
      containers:
        - name: {{ template "service.name" . }}
          image: {{ $.Values.image.repo }}:{{ $.Values.image.tag }}
          # command: ["bin/sh", "-c", "cat /tls/tls.key"]
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          volumeMounts:
            - name: tls-cert
              mountPath: "/tls"
              readOnly: true
          ports:
            - containerPort: 80
          env:
            - name: DATASOURCE_URL
              valueFrom:
                secretKeyRef:
                  name: service
                  key: DATASOURCE_URL
            - name: DATASOURCE_USER
              valueFrom:
                secretKeyRef:
                  name: service
                  key: DATASOURCE_USER
            - name: DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: service
                  key: DATASOURCE_PASSWORD
      volumes:
        - name: tls-cert
          projected:
            sources:
              - secret:
                  name: postgres-tls
                  items:
                    - key: tls.crt
                      path: tls.crt
                    - key: tls.key
                      path: tls.key
So I figured it out: I was asking the wrong question!
Google Cloud SQL provides a proxy component (the Cloud SQL Proxy) for the Postgres database, so instead of connecting the traditional way (the problem I was trying to solve), I resolved it by using the proxy. Rather than dealing with whitelisting IPs, SSL certs, and so on, you just spin up the proxy, point it at a GCP credential file, and update your database URI to connect via localhost.
To set up the proxy, you can find directions here. There is a good example of a k8s deployment file here.
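Roughly, the proxy runs as a sidecar container in the same pod as the service, with the service account key mounted from a Secret. A sketch of what that container entry can look like (the image tag, instance connection name, and secret/volume names here are placeholders, not my exact setup):

# Cloud SQL Proxy sidecar (sketch) -- placeholders in angle brackets
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.16
  command: ["/cloud_sql_proxy",
            "-instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:5432",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true

The datasource URL then just points at localhost, e.g. jdbc:postgresql://127.0.0.1:5432/[database], with no SSL parameters needed.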
One issue I did come across was with the GCP service account. Make sure to add both the Cloud SQL Client AND Cloud SQL Editor roles. I only added Cloud SQL Client to start with and kept getting a 403 error.
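If you grant the roles from the command line rather than the console, it looks something like this (project ID and service account name are placeholders):

gcloud projects add-iam-policy-binding <PROJECT_ID> \
  --member="serviceAccount:<SA_NAME>@<PROJECT_ID>.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"

gcloud projects add-iam-policy-binding <PROJECT_ID> \
  --member="serviceAccount:<SA_NAME>@<PROJECT_ID>.iam.gserviceaccount.com" \
  --role="roles/cloudsql.editor"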