Cannot connect to Google MySQL from deployed Kubernetes NodeJS app

10/30/2018

I have been trying for the past couple of days to get my deployed NodeJS Kubernetes LoadBalancer app to connect to a Google Cloud SQL MySQL instance. The SQL database and the Kubernetes deployment exist in the same Google Cloud project. The ORM of choice for this project is Sequelize. Here is a snippet of my connection configuration:

"deployConfigs": {
   "username": DB_USERNAME,
   "password": DB_PASSWORD,
   "database": DB_DATABASE,
   "host": DB_HOST,
   "port": 3306,
   "dialect": "mysql",
   "socketPath": "/cloudsql/INSTANCE_NAME"
}
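One thing worth noting about that snippet: Sequelize only honors a unix socket path when it is nested under `dialectOptions` (where it is passed through to the MySQL driver), and `host`/`port` should be omitted when a socket is used. Below is a hedged sketch of how the two configurations could be kept apart; the helper name and the `INSTANCE_CONNECTION_NAME` variable (the `project:region:instance` string) are illustrative assumptions, not part of the original setup:

```javascript
// Hypothetical helper: build Sequelize options from environment variables.
// Assumes DB_USERNAME, DB_PASSWORD, DB_DATABASE are always set, plus either
// INSTANCE_CONNECTION_NAME (in-cluster) or DB_HOST (local development).
function buildSequelizeOptions(env) {
  const base = {
    database: env.DB_DATABASE,
    username: env.DB_USERNAME,
    password: env.DB_PASSWORD,
    dialect: 'mysql',
  };
  if (env.INSTANCE_CONNECTION_NAME) {
    // Deployed: connect over the Cloud SQL unix socket. With Sequelize,
    // socketPath must live under dialectOptions, not at the top level,
    // and host/port must be left out entirely.
    return {
      ...base,
      dialectOptions: {
        socketPath: `/cloudsql/${env.INSTANCE_CONNECTION_NAME}`,
      },
    };
  }
  // Local development: plain TCP to the instance's (whitelisted) IP.
  return { ...base, host: env.DB_HOST, port: 3306 };
}

// Usage sketch:
//   const opts = buildSequelizeOptions(process.env);
//   const sequelize = new Sequelize(opts.database, opts.username,
//                                   opts.password, opts);
```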

When I run the application locally with the same configurations, I am able to query from the database. I can also hit the NodeJS LoadBalancer URL to get a valid API response as long as the API does not hit the database.

I have whitelisted my IP as well as the IP for the NodeJS LoadBalancer API but I still get the following response:

{
  "name": "SequelizeConnectionError",
  "parent": {
    "errorno": "ETIMEDOUT",
    "code": "ETIMEDOUT",
    "syscall": "connect",
    "fatal": true
  },
  "original": {
    "errorno": "ETIMEDOUT",
    "code": "ETIMEDOUT",
    "syscall": "connect",
    "fatal": true
  }
}

I followed the instructions for creating a proxy through a Kubernetes deployment, but I don't think that will necessarily solve my issue, because I simply want to connect from my Kubernetes app to a persistent database.

Again, I have been able to successfully hit the remote DB both when running the container locally and when running the Node app locally. I am really unsure why it will not connect when deployed.

Thanks!

-- user2844780
gcloud
kubernetes
mysql
node.js
sequelize.js

2 Answers

10/30/2018

Kubernetes does a lot of source NATing, so I had to add a rule like this on my network to allow outgoing traffic everywhere from my cluster in GCE:

[Screenshot: GCP egress firewall rule allowing all outgoing traffic from the cluster]
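An equivalent rule can also be created from the CLI. This is only a sketch: the rule name, the `default` network, and the wide-open destination range are assumptions you should tighten once connectivity is confirmed:

```shell
# Allow egress TCP traffic on port 3306 from instances on the network.
# WARNING: 0.0.0.0/0 is very permissive -- narrow --destination-ranges
# to your Cloud SQL instance's IP after testing.
gcloud compute firewall-rules create allow-egress-mysql \
  --network=default \
  --direction=EGRESS \
  --action=ALLOW \
  --rules=tcp:3306 \
  --destination-ranges=0.0.0.0/0
```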

This is very permissive, so you might want to add it just for testing purposes initially. You can also check connectivity to MySQL by shelling into a running pod:

$ kubectl exec -it <running-pod> -- sh
/home/user # telnet $DB_HOST 3306
-- Rico
Source: StackOverflow

1/21/2019

It sounds like you might be attempting to connect to your Cloud SQL instance via its public IP. If that's the case, be careful, as that is not supported. Take a look at this documentation page to figure out the best way to go about it.

You mentioned you're already using a proxy, but didn't mention which one. If it's the Cloud SQL Proxy, it should allow you to perform any kind of operation you want against your database; all it does is establish a connection between a client (i.e. a pod) and the Cloud SQL instance. The proxy should work without any issues.
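For reference, the usual pattern is to run the proxy as a sidecar container in the same pod as the app, with the app connecting to 127.0.0.1:3306. A minimal sketch follows; the image tag, project/instance names, and secret name are all placeholders, not values from your setup:

```yaml
# Sidecar sketch: the app container talks to 127.0.0.1:3306,
# and the proxy forwards traffic to the Cloud SQL instance.
spec:
  containers:
    - name: app
      image: gcr.io/my-project/my-node-app   # placeholder
    - name: cloudsql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.16
      command: ["/cloud_sql_proxy",
                "-instances=my-project:us-central1:my-instance=tcp:3306",
                "-credential_file=/secrets/cloudsql/credentials.json"]
      volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
  volumes:
    - name: cloudsql-instance-credentials
      secret:
        secretName: cloudsql-instance-credentials   # placeholder
```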

Don't forget to set up the appropriate grants and permissions on the Cloud SQL side of things.

-- Lopson
Source: StackOverflow