API pod failed to connect to mongo pod in the same Kubernetes cluster

7/22/2017

I was running a Node.js API as a pod on Google Compute Engine, managed by Kubernetes. The API was connecting to the database fine, but then it suddenly failed with this error:

listening to http server on 0.0.0.0:8080...

events.js:160
      throw er; // Unhandled 'error' event
      ^
MongoError: failed to connect to server [mongo:27017] on first connect [MongoError: getaddrinfo ENOTFOUND mongo mongo:27017]
    at Pool.<anonymous> (/usr/src/app/node_modules/mongodb-core/lib/topologies/server.js:328:35)
    at emitOne (events.js:96:13)
    at Pool.emit (events.js:188:7)
    at Connection.<anonymous> (/usr/src/app/node_modules/mongodb-core/lib/connection/pool.js:274:12)
    at Connection.g (events.js:292:16)
    at emitTwo (events.js:106:13)
    at Connection.emit (events.js:191:7)
    at Socket.<anonymous> (/usr/src/app/node_modules/mongodb-core/lib/connection/connection.js:177:49)
    at Socket.g (events.js:292:16)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:188:7)
    at connectErrorNT (net.js:1021:8)
    at _combinedTickCallback (internal/process/next_tick.js:80:11)
    at process._tickDomainCallback (internal/process/next_tick.js:128:9)

I tried restarting the pods and deleting and re-creating the containers, but with no success.

This is how I connect to the database:

mongoose.connect(process.env.MONGO_DEV_URL || process.env.MONGODB_URI || 'mongodb://mongo:27017/yaxiDb', { useMongoClient: true });
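
As a side note, the throw er; // Unhandled 'error' event line in the trace above is what actually kills the process. A minimal sketch along these lines (assuming the same yaxiDb database and mongo service name used in the question) would log the connection failure instead of crashing the pod, which makes it easier to inspect:

const mongoose = require('mongoose');

// Same fallback chain as above; the final URI is an assumption based on the
// database name mentioned in the question.
const uri = process.env.MONGO_DEV_URL
  || process.env.MONGODB_URI
  || 'mongodb://mongo:27017/yaxiDb';

// Log connection state changes instead of letting the 'error' event go unhandled.
mongoose.connection.on('connected', () => console.log('connected to ' + uri));
mongoose.connection.on('error', (err) => console.error('mongo connection error:', err.message));

mongoose.connect(uri, { useMongoClient: true });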

How can I debug this, and where could the problem be?

-- bigOther
dns
google-compute-engine
kubernetes
mongodb
node.js

1 Answer

7/22/2017

I just found the solution: the order in which services are created matters. I deleted all the services that use that database and re-created them:

kubectl delete -f containers/backend/
kubectl create -f containers/backend/
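
Since the underlying error was a DNS lookup failure (getaddrinfo ENOTFOUND mongo), it is also worth checking that a Service named mongo exists and resolves from inside the cluster. A rough check (assuming the mongo Service lives in the same namespace as the API pod and that a busybox image can be pulled) could look like:

kubectl get svc mongo
kubectl get endpoints mongo
kubectl run -i -t dns-test --image=busybox --restart=Never --rm -- nslookup mongo

If nslookup cannot resolve mongo, the problem is with the Service or cluster DNS rather than with the API code itself.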
-- bigOther
Source: StackOverflow