I followed the official documentation on how to deploy Microsoft SQL Server as a container in Kubernetes on Azure Kubernetes Service. There are a couple of things I'm noticing that concern me:
When I execute `kubectl get pods`, it shows two mssql pods: one Running and one Pending. Even if I delete all the pods with `kubectl delete pods -l app=mssql`, Kubernetes recreates both of them.
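My guess is that the second pod comes from the Deployment itself, so I have been trying to figure out why it stays Pending. Assuming the pods are managed by a Deployment named `mssql-deployment` with the `app=mssql` label, as in the official walkthrough (adjust the names to your setup), I believe something like this shows the owning controller and the scheduler's reason in the Events section:

```sh
# How many replicas does the Deployment actually want?
kubectl get deployment mssql-deployment

# The Events section at the bottom usually explains a Pending pod,
# e.g. "Insufficient memory" or a volume that is already attached elsewhere
kubectl describe pod -l app=mssql
```

If the persistent volume is an Azure Disk with ReadWriteOnce access, my understanding is that the default RollingUpdate strategy can leave a replacement pod Pending because it cannot attach the disk while the old pod still holds it; setting `strategy: type: Recreate` on the Deployment is supposed to avoid that overlap, though I haven't confirmed that this is what's happening here.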
As I use my application, at some point I start getting errors:
```
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
```
And
```
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 35 - An internal exception was caught) ---> System.IO.IOException: Unable to read data from the transport connection: Connection reset by peer.
```
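These errors seem to coincide with the mssql pod being restarted or rescheduled, so I have been trying to check whether the container is getting killed, e.g. OOMKilled. A rough sketch of the checks, again assuming the `app=mssql` label:

```sh
# Watch the RESTARTS column and which node the pod lands on
kubectl get pods -l app=mssql -o wide

# "Last State" shows the reason for the previous container termination,
# e.g. OOMKilled when the memory limit was exceeded
kubectl describe pod -l app=mssql | grep -A 5 "Last State"

# Recent cluster events, newest last
kubectl get events --sort-by=.lastTimestamp
```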
I haven't done anything differently from the official documentation besides adding the container resource limits, and I believe the container doesn't even hit those limits. I had Kubernetes running on 2x 4GB VMs, and as I was adding SQL Server I added a 3rd 4GB VM so that Kubernetes has enough CPU and memory to work with:
```yaml
resources:
  requests:
    cpu: 300m
    memory: 2.5Gi
  limits:
    cpu: 400m
    memory: 3Gi
```
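To double-check that the container really stays under those limits, I assume the metrics-server that AKS ships with can confirm it, something like:

```sh
# Live CPU/memory usage of the mssql pod vs. the 400m / 3Gi limits
kubectl top pod -l app=mssql

# How much CPU/memory each node has already promised to its pods
kubectl describe nodes | grep -A 8 "Allocated resources"
```

From what I've read, the SQL Server container itself needs at least 2 GB of RAM, and on a 4 GB VM the allocatable memory left after system pods may not fit a second 2.5Gi request, which would also explain a pod stuck in Pending.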
Last but not least, if I run `kubectl logs mssql-xyz`, sometimes I see the following in the logs:
```
2020-10-28 13:24:16.83 Logon Login failed for user 'sa'. Reason: Password did not match that for the login provided. CLIENT: 10.240.0.4
2020-10-28 13:24:18.09 Logon Error: 18456, Severity: 14, State: 8.
```
Obviously, the password in my application is correct. It may be related to the other errors; I don't know...
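The only way I found to verify that the password in the cluster matches my connection string is to decode the secret directly. This assumes the secret is named `mssql` with a key `SA_PASSWORD`, as in the official walkthrough; adjust to your names:

```sh
# Print the SA password the container was actually started with
kubectl get secret mssql -o jsonpath='{.data.SA_PASSWORD}' | base64 --decode
```

I've also read that once the data files exist on the persistent volume, changing the `SA_PASSWORD` secret does not change the actual sa password inside SQL Server, because it is stored in the master database on that volume; I don't know whether that's a factor here.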
Any thoughts on all this? How can I make it run stably, without these errors?