Kubernetes node.js container cannot connect to MongoDB Atlas

5/31/2019

So I've been struggling with this all afternoon. I can't get my Node.js application running on Kubernetes to connect to my MongoDB Atlas database.

In my application I've tried running mongoose.connect('mongodb+srv://admin:<password>@<project>.gcp.mongodb.net/project_prod?retryWrites=true&w=majority', { useNewUrlParser: true })

but I simply get the error

UnhandledPromiseRejectionWarning: Error: querySrv ETIMEOUT _mongodb._tcp.<project>.gcp.mongodb.net
    at QueryReqWrap.onresolve [as oncomplete] (dns.js:196:19)
(node:32) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 17)
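As an aside, the UnhandledPromiseRejectionWarning itself just means the connection promise has no rejection handler. A minimal sketch of the pattern (the `connect` function here is a stand-in that rejects the way a failed SRV lookup does; it is not mongoose itself):

```javascript
// Stand-in for mongoose.connect(uri, { useNewUrlParser: true }):
// rejects immediately, mimicking a DNS SRV timeout.
function connect(uri) {
  return Promise.reject(
    new Error(`querySrv ETIMEOUT _mongodb._tcp.${uri}`)
  );
}

// Always attach a .catch (or use try/await) so the failure surfaces
// as a handled error instead of an UnhandledPromiseRejectionWarning.
connect('cluster.example.mongodb.net')
  .then(() => console.log('connected'))
  .catch((err) => console.error('Mongo connection failed:', err.message));
```

This doesn't fix the DNS problem, but it turns the crash-style warning into a log line you can act on.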

I've also tried setting up an ExternalName service, but neither the URL nor the ExternalName lets me connect to the database.
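For reference, an ExternalName service is only a DNS CNAME alias, so a sketch would look like the following (the service name and Atlas hostname are placeholders). Note that it aliases a single hostname and does not help with the `mongodb+srv://` scheme, which needs an SRV record lookup:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo-atlas                          # placeholder service name
spec:
  type: ExternalName
  externalName: cluster0.example.mongodb.net # placeholder Atlas host
```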

I've whitelisted my IP on MongoDB Atlas, so I know that isn't the issue.

It also seems to work on my local machine, but not in the Kubernetes pod. What am I doing wrong?

-- Matthew Weeks
kubernetes
mongodb
mongodb-atlas
mongoose
node.js

2 Answers

6/1/2019

I use MongoDB Atlas from Kubernetes, but on AWS. For testing purposes you can temporarily allow access from all IP addresses; for a production setup, the approach is:

  • MongoDB Atlas supports Network Peering.
  • Go to Network Access > New Peering Connection.
  • For AWS you specify the VPC ID, CIDR block, and region. For GCP it should be the standard procedure used for VPC peering.
-- Kevin Prasanna R R
Source: StackOverflow

6/3/2019

I figured out the issue: my pod's DNS was not configured to resolve external names, so I set dnsPolicy: Default in my YAML. Oddly enough, Default is not actually the default value (that's ClusterFirst) — it means the pod inherits the node's DNS configuration, which can resolve the Atlas SRV records.
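A minimal sketch of where the field goes in a pod spec, assuming a simple Deployment (the names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app              # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      dnsPolicy: Default         # use the node's DNS instead of ClusterFirst
      containers:
        - name: app
          image: my-node-app:latest   # placeholder image
```

Note that with dnsPolicy: Default the pod can no longer resolve in-cluster service names via cluster DNS, so this trade-off only suits pods that talk exclusively to external hosts.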

-- Matthew Weeks
Source: StackOverflow