The remote certificate is invalid according to the validation procedure for SqlClient

9/19/2019

I'm using the MyHealthClinic app (https://azuredevopslabs.com/labs/vstsextend/kubernetes/), a .NET Core front-end and back-end Kubernetes deployment. I'm deploying to Google Kubernetes Engine and trying to connect to a SQL Server VM, but the pod goes into CrashLoopBackOff after pulling the image I pushed, with the following error on startup:

Unhandled Exception: System.Data.SqlClient.SqlException: A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 35 - An internal exception was caught) ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Net.Security.SslState.StartSendAuthResetSignal(ProtocolToken ...

I've checked my appsettings.json and it seems correct in that I have it set as:

"DefaultConnection": "Server={my-external-IP},1433;Initial Catalog=mhcdb;Persist Security Info=False;User ID={sqlusername};Password={sqlpassword};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"

I've also confirmed:

  • The SQL VM is reachable from all external IPs I have for the front-end cluster and from my local machine
  • The firewall on the machine and for the VPC network allows port 1433
  • I can connect successfully from my local machine using the same SQL VM IP and credentials
  • The IP in the connection string is specified without an http/https prefix

Is there anywhere else I can check to try to fix this? I can deploy the cluster to AKS in Azure without issues, but I'm not sure whether GKE may be blocking outbound connections from the cluster. The only similar questions I've found so far relate to SMTP servers. I'm a bit new to GKE, so any ideas will help.
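Note that the error itself ("A connection was successfully established with the server, but then an error occurred during the pre-login handshake") already implies the TCP connection succeeds and the failure happens at the TLS layer, so GKE isn't blocking the traffic. If you want to confirm this from inside the cluster anyway, a minimal raw-TCP reachability check (a sketch, assuming only that Python is available in a pod you can `kubectl exec` into; the IP below is a placeholder):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with your SQL VM's external IP. If this returns True from inside
# the cluster, networking is fine and the problem is certificate validation.
# can_reach("198.51.100.10", 1433)
```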

If it helps, here's my deployment YAML file (I kept it the same as for the AKS cluster, so I'm not sure whether anything needs to change specifically for GKE):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mhc-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mhc-back
    spec:
      containers:
      - name: mhc-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis

---

apiVersion: v1
kind: Service
metadata:
  name: mhc-back
spec:
  ports:
  - port: 6379
  selector:
    app: mhc-back

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mhc-front
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5 
  template:
    metadata:
      labels:
        app: mhc-front
    spec:
      containers:
      - name: mhc-front
        image: {gcr.io/[Project-Id]}/myhealth.web:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
        env:
        - name: REDIS
          value: "mhc-back"

---

apiVersion: v1
kind: Service
metadata:
  name: mhc-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: mhc-front
-- m00nbeam360.0
.net-core
google-kubernetes-engine
kubernetes

1 Answer

9/19/2019

After I started looking into why the remote (SQL) certificate was invalid, I changed my connection string to include TrustServerCertificate=True. Since this is a demo environment and I kept Encrypt=True, this fixed everything! If anyone thinks bypassing server certificate validation is a bad idea, let me know too.

"DefaultConnection": "Server={my-external-IP},1433;Initial Catalog=mhcdb;Persist Security Info=False;User ID={sqlusername};Password={sqlpassword};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;"
-- m00nbeam360.0
Source: StackOverflow