I'm using the MyHealthClinic app (https://azuredevopslabs.com/labs/vstsextend/kubernetes/), which is a .NET Core frontend and backend running in a Kubernetes cluster. I'm deploying it to Google Kubernetes Engine and trying to connect to a SQL Server VM, but the pod goes into CrashLoopBackOff when it starts up after pulling the image I pushed, with the following error:
Unhandled Exception: System.Data.SqlClient.SqlException: A connection was successfully
established with the server, but then an error occurred during the pre-login handshake.
(provider: TCP Provider, error: 35 - An internal exception was caught) --->
System.Security.Authentication.AuthenticationException: The remote certificate is invalid
according to the validation procedure. at
System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Net.Security.SslState.StartSendAuthResetSignal(ProtocolToken ...
I've checked my appsettings.json and it seems correct; I have it set as:
"DefaultConnection": "Server={my-external-IP},1433;Initial Catalog=mhcdb;Persist Security Info=False;User ID={sqlusername};Password={sqlpassword};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
I've also confirmed:
Is there anywhere else I can check to try and fix this? I'm able to deploy the cluster to AKS in Azure without issues, but I'm not sure whether GKE might be blocking outbound connections from the cluster. The only similar questions I've found so far are about SMTP servers. I'm a bit new to GKE, so any ideas will help.
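I guess one way to rule out a blocked outbound connection would be to run a throwaway pod in the GKE cluster and try to open a TCP connection to the SQL VM on 1433 myself (with my real external IP substituted in), something like:

kubectl run sqltest --rm -it --image=busybox --restart=Never -- sh
# inside the pod, attempt a raw TCP connection to the SQL Server VM
telnet {my-external-IP} 1433

If that connects rather than timing out, I assume the network path from GKE is fine and the problem is further up the stack.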
If it helps, here's my deployment YAML file (kept the same as for the AKS cluster, so I'm not sure whether anything needs to change specifically for GKE):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mhc-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mhc-back
    spec:
      containers:
      - name: mhc-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: mhc-back
spec:
  ports:
  - port: 6379
  selector:
    app: mhc-back
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mhc-front
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: mhc-front
    spec:
      containers:
      - name: mhc-front
        image: {gcr.io/[Project-Id]}/myhealth.web:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
        env:
        - name: REDIS
          value: "mhc-back"
---
apiVersion: v1
kind: Service
metadata:
  name: mhc-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: mhc-front
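In case it matters, this is roughly how I'm reproducing the error above: apply the manifest and look at the pod logs (file and pod names below are placeholders):

kubectl apply -f deployment.yaml     # the YAML above
kubectl get pods                     # this is where the pod shows CrashLoopBackOff
kubectl logs <crashing-pod-name>     # this is where the SqlException above appears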
After I started looking into why the remote certificate (SQL) was reported as invalid, I changed my connection string to include TrustServerCertificate=True. Since this is a demo environment and I kept Encrypt=True, this looks like it fixed everything! If anyone thinks bypassing server certificate validation is a bad idea, let me know too.
"DefaultConnection": "Server={my-external-IP},1433;Initial Catalog=mhcdb;Persist Security Info=False;User ID={sqlusername};Password={sqlpassword};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;"