I am having trouble connecting a Google Cloud Platform Kubernetes pod to an external MySQL database running on AWS.
Here's my deployment file (some sensitive parts replaced by ***):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: watches-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: watches-v1
  template:
    metadata:
      labels:
        app: watches-v1
    spec:
      containers:
      - name: watches-v1
        image: silasberger/watches:1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        env:
        - name: MYSQL_HOST
          value: "***.eu-west-1.rds.amazonaws.com"
        - name: MYSQL_DB
          value: "***"
        - name: MYSQL_USER
          value: "***"
        - name: MYSQL_PASS
          value: "***"
        - name: API_USER
          value: "***"
        - name: API_PASS
          value: "***"
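I apply it to the cluster with the usual command (the file name below is just a placeholder for whatever the manifest is saved as locally):
$ kubectl apply -f watches-v1-deployment.yaml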
This is the Dockerfile which I build and push to Docker Hub as watches:1.0:
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
ENV MICROSERVICE="watches"
ENV WATCHES_API_VERSION="1"
CMD [ "npm", "start" ]
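For completeness, the image is built and pushed with the standard Docker commands (the tag matches the one referenced in the deployment above):
$ docker build -t silasberger/watches:1.0 .
$ docker push silasberger/watches:1.0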
The following works: I can connect to the AWS database from my local machine using the mysql command.
However, as soon as I apply the deployment in my Kubernetes cluster, the pods aren't able to connect to the AWS DB. The application starts and I can access the Swagger page, but when I run the kubectl logs <pod-name> command, I always get this error:
Unable to connect to the database: { SequelizeConnectionError: connect ETIMEDOUT
at Utils.Promise.tap.then.catch.err (/usr/src/app/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:149:19)
at tryCatcher (/usr/src/app/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/usr/src/app/node_modules/bluebird/js/release/promise.js:512:31)
at Promise._settlePromise (/usr/src/app/node_modules/bluebird/js/release/promise.js:569:18)
at Promise._settlePromise0 (/usr/src/app/node_modules/bluebird/js/release/promise.js:614:10)
at Promise._settlePromises (/usr/src/app/node_modules/bluebird/js/release/promise.js:690:18)
at _drainQueueStep (/usr/src/app/node_modules/bluebird/js/release/async.js:138:12)
at _drainQueue (/usr/src/app/node_modules/bluebird/js/release/async.js:131:9)
at Async._drainQueues (/usr/src/app/node_modules/bluebird/js/release/async.js:147:5)
at Immediate.Async.drainQueues (/usr/src/app/node_modules/bluebird/js/release/async.js:17:14)
at runCallback (timers.js:810:20)
at tryOnImmediate (timers.js:768:5)
at processImmediate [as _immediateCallback] (timers.js:745:5)
name: 'SequelizeConnectionError',
parent:
{ Error: connect ETIMEDOUT
at Connection._handleTimeoutError (/usr/src/app/node_modules/mysql2/lib/connection.js:192:13)
at ontimeout (timers.js:498:11)
at tryOnTimeout (timers.js:323:5)
at Timer.listOnTimeout (timers.js:290:5)
errorno: 'ETIMEDOUT',
code: 'ETIMEDOUT',
syscall: 'connect',
fatal: true },
original:
{ Error: connect ETIMEDOUT
at Connection._handleTimeoutError (/usr/src/app/node_modules/mysql2/lib/connection.js:192:13)
at ontimeout (timers.js:498:11)
at tryOnTimeout (timers.js:323:5)
at Timer.listOnTimeout (timers.js:290:5)
errorno: 'ETIMEDOUT',
code: 'ETIMEDOUT',
syscall: 'connect',
fatal: true } }
The application picks up the correct host, DB name, and credentials (as shown in an earlier part of the log, not included here), but it apparently can't reach the database. As you can see, the application is written in Node.js and uses Sequelize.
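One way to rule out the application and isolate the network path (a debugging sketch, not something from the original setup; the host and user are the same redacted placeholders as above, and the mysql:5.7 image tag is an assumption) is to run the same mysql client that works from my local machine inside a throwaway pod in the cluster:
$ kubectl run mysql-test --rm -it --image=mysql:5.7 --restart=Never -- \
    mysql -h ***.eu-west-1.rds.amazonaws.com -u *** -p
If that also hangs and eventually fails with a connection timeout, the problem is somewhere in the network path (firewalls or security groups) rather than in the Node.js/Sequelize configuration.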
All the research I have done so far pointed to a firewall issue, so I created the following VPC firewall rule in the Google Cloud Platform project:
$ gcloud compute firewall-rules describe allow-mysql-outbound
allowed:
- IPProtocol: all
creationTimestamp: '2018-11-14T02:51:20.808-08:00'
description: Allow all inbound connections
destinationRanges:
- 0.0.0.0/0
direction: EGRESS
disabled: false
id: '7178441953737326791'
kind: compute#firewall
name: allow-mysql-outbound
network: https://www.googleapis.com/compute/v1/projects/adept-vine-222109/global/networks/default
priority: 1000
selfLink: https://www.googleapis.com/compute/v1/projects/adept-vine-222109/global/firewalls/allow-mysql-outbound
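For reference, an egress rule like the one shown above can be created along these lines (a sketch of the equivalent create command, not necessarily the exact one I ran):
$ gcloud compute firewall-rules create allow-mysql-outbound \
    --direction=EGRESS --action=ALLOW --rules=all \
    --destination-ranges=0.0.0.0/0 --network=default
As far as I understand, the default VPC already allows all egress traffic anyway, so a rule like this shouldn't really be necessary in the first place.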
Since this didn't change anything, I also tried adding the same rule again with direction INGRESS, but that didn't work either (as I expected).
I am totally new to the Google Cloud Platform and to Kubernetes, so maybe this is just a dumb mistake, but I'm really out of ideas on how to get it to work.
As it turns out, the problem was on the AWS side. Thanks to Jacob Tomlinson for the suggestion.
While Public Accessibility was enabled for the AWS MySQL instance, it apparently still didn't allow access from all sources. I'm not sure why it worked from my local machine, but anyway.
I was able to solve it by adding a security group in AWS that allows inbound traffic on all ports and protocols from the source 0.0.0.0/0, and then associating this security group with my MySQL instance (go to the instance, click Modify, go to the Network & Security settings, choose the newly created group, and save the changes). I will still need to tighten this rule from a security perspective, but at least it all works now.
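As a first step towards tightening it, one option would be to allow only the MySQL port from the addresses the cluster actually uses for outbound traffic, instead of everything from 0.0.0.0/0; roughly like this (the security group ID and CIDR are placeholders):
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3306 \
    --cidr 203.0.113.0/24
Which CIDR is right depends on how the GKE nodes reach the internet (node external IPs or a NAT address), so that needs to be checked before locking it down.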