Node.js unable to connect to MongoDB using env variables (Kubernetes)

5/21/2019

I have a container-based application running Node.js, with a MongoDB container as the backend.

Basically, I am planning to run this in Kubernetes.

I have deployed this as separate containers in my current environment and it works fine: a MongoDB container and a Node.js container.

To connect the two, I would run:

docker run -d --link=mongodb:mongodb -e MONGODB_URL='mongodb://mongodb:27017/user' -p 4000:4000 e922a127d049 

This passes MONGODB_URL into process.env in my Node.js container. My connection.js then reads process.env.MONGODB_URL into mongoDbUrl, as shown below.

const mongoClient = require('mongodb').MongoClient;
const mongoDbUrl = process.env.MONGODB_URL;
//console.log(process.env.MONGODB_URL)
let mongodb;

function connect(callback){
    mongoClient.connect(mongoDbUrl, (err, db) => {
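        // note: err is not checked here; if the connection fails, db is null, so get() will return null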
        mongodb = db;
        callback();
    });
}
function get(){
    return mongodb;
}

function close(){
    mongodb.close();
}

module.exports = {
    connect,
    get,
    close
};

To deploy on Kubernetes, I have written YAML files for:

1) web controller
2) web service
3) mongoDB controller
4) mongoDB service

This is my current MongoDB controller:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
      - image: mongo:latest
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
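          # hostPort is not needed here; the mongodb Service below already exposes the port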
          hostPort: 27017

My MongoDB service:

apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongodb
  name: mongodb
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    name: mongo

My web controller:

apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 1
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - image: leexha/node_demo:21
        env:
        - name: MONGODB_URL
          value: "mongodb://mongodb:27017/user"
        name: web
        ports:
        - containerPort: 4000
          name: node-server

And my web service:

apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
spec:
  type: NodePort
  ports:
    - port: 4000
      targetPort: 4000
      protocol: TCP
  selector:
    name: web

I was able to deploy all the services and pods on my local Kubernetes cluster.

However, when I tried to access the web application over the NodePort, it told me there was a connection error to MongoDB:

TypeError: Cannot read property 'collection' of null
    at /app/app.js:24:17
    at Layer.handle [as handle_request] 

This is my Node.js code for app.js:

var bodyParser = require('body-parser')
, MongoClient = require('mongodb').MongoClient
, PORT = 4000
, instantMongoCrud = require('express-mongo-crud') // require the module
, express = require('express')
, app = express()
, path = require('path')
, options = { //specify options
    host: `localhost:${PORT}`
}
, db = require('./connection')


// connection to database
db.connect(() => {

    app.use(bodyParser.json()); // add body parser
    app.use(bodyParser.urlencoded({ extended: true }));
    //console.log('Hello ' + process.env.MONGODB_URL)

    // get function 
    app.get('/', function(req, res) {
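        // the TypeError in the stack trace originates here: db.get() returns null if the initial connection failed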
        db.get().collection('users').find({}).toArray(function(err, data){
            if (err)
                console.log(err)
            else
                res.render('../views/pages/index.ejs',{data:data});
        });
    });

Clearly, this is an error where my Node.js application is unable to reach the MongoDB service: the connection never succeeds, so db.get() returns null and the .collection() call throws.

At first I thought MONGODB_URL was not set in the container. However, when I checked the Node.js container using

kubectl exec -it web-controller-r269f /bin/bash

and ran echo $MONGODB_URL, it returned mongodb://mongodb:27017/user, which is correct.

I'm quite unsure what I am doing wrong, as I am fairly sure I have done everything in order and my web deployment should be communicating with the MongoDB service. Any help? Sorry, I am still learning Kubernetes; please pardon any mistakes.

-- adr
kubernetes

1 Answer

5/21/2019

[Edit]

Sorry, my bad: the connection string mongodb://mongodb:27017 would actually work. I tried a DNS query for that name, and it resolved to the correct IP address even without specifying ".default.svc...".

root@web-controller-mlplb:/app# host mongodb
mongodb.default.svc.cluster.local has address 10.108.119.125

@Anshul Jindal is correct that you have a race condition, where the web pods are loaded before the database pods. You were probably running kubectl apply -f . on the whole folder. Try a reset with kubectl delete -f . in the folder containing those YAML files, then kubectl apply the database manifests first and, after a few seconds, kubectl apply the web manifests. You could also use Init Containers to check that the mongo service is ready before the web pods start, as sketched below. Or, you can do that check in your Node.js application.
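A minimal sketch of the Init Container approach, added to the pod template of the web controller; the busybox image, the wait-for-mongodb name, and the nc probe loop are illustrative choices, not from the original post. The init container simply loops until the mongodb Service accepts TCP connections on port 27017:

spec:
  initContainers:
  - name: wait-for-mongodb
    image: busybox:1.31
    # busybox nc -z opens and closes a TCP connection; the loop exits once mongodb:27017 is reachable
    command: ['sh', '-c', 'until nc -z mongodb 27017; do echo waiting for mongodb; sleep 2; done']
  containers:
  - image: leexha/node_demo:21
    name: web

Kubernetes starts the regular containers of a pod only after all of its init containers have exited successfully, so the Node.js application never comes up before the database is reachable.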

Example of waiting for mongodb service in Node.js

In your connection.js file, you can change the connect function so that if it fails the first time (e.g. because the mongodb service/pod is not available yet), it retries every 3 seconds until a connection can be established. This way you don't even have to worry about the load order when applying Kubernetes manifests; you can just kubectl apply -f .

let RECONNECT_INTERVAL = 3000

function connect(callback){
    mongoClient.connect(mongoDbUrl, (err, db) => {
        if (err) {
            // connection failed; schedule another attempt instead of giving up
            console.log("attempting to reconnect to " + mongoDbUrl)
            setTimeout(connect.bind(this, callback), RECONNECT_INTERVAL)
            return
        } else {
            mongodb = db;
            callback();
        }
    });
}
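One caveat, in case the project is on the 3.x MongoDB driver: there the connect callback receives a MongoClient rather than a Db, so storing the second argument directly would still break the later .collection() calls. A minimal sketch of the same retry under that assumption, taking the user database name from the connection string:

let RECONNECT_INTERVAL = 3000

function connect(callback){
    mongoClient.connect(mongoDbUrl, (err, client) => {
        if (err) {
            console.log("attempting to reconnect to " + mongoDbUrl)
            setTimeout(connect.bind(this, callback), RECONNECT_INTERVAL)
            return
        }
        mongodb = client.db('user')  // 3.x: take the Db handle from the client
        callback()
    });
}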
-- redgetan
Source: StackOverflow