Multiple MongoDB StatefulSets in the same Kubernetes Cluster

8/21/2019

My goal is to create a StatefulSet in both the production and staging namespaces. I am able to create the production StatefulSet; however, when deploying one to the staging namespace, I receive the error:

failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]

The YAML I am using for the staging setup is as so:

staging-service.yml

apiVersion: v1
kind: Service
metadata:
  name: mongodb-staging
  namespace: staging
  labels:
    app: ethereumdb
    environment: staging
spec:
  ports:
  - name: http
    protocol: TCP
    port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongodb
    environment: staging

staging-statefulset.yml

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongodb-staging
  namespace: staging
  labels:
    app: ethereumdb
    environment: staging
  annotations:
    prometheus.io.scrape: "true"
spec:
  serviceName: "mongodb-staging"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongodb
        environment: staging
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: role
                operator: In
                values:
                - mongo
              - key: environment
                operator: In
                values:
                - staging
            topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
            - "--bind_ip_all"
            - "--wiredTigerCacheSizeGB=0.5"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongodb,environment=staging"
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: "mongodb-staging"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast-storage
      resources:
        requests:
          storage: 1Gi

The production namespace deployment differs only in:

  • --replSet value (rs1 instead of rs0)
  • Use of the name 'production' instead of 'staging' in names and labels

Everything else remains identical in both deployments.
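
For reference, a minimal sketch of the fields that would differ in the production manifests, assuming the production objects are simply the staging ones renamed (the names, namespace and labels below are assumptions based on the description above):

production-statefulset.yml (excerpt)

metadata:
  name: mongodb-production
  namespace: production
  labels:
    app: ethereumdb
    environment: production
spec:
  serviceName: "mongodb-production"
  ...
  template:
    spec:
      containers:
        - name: mongo
          command:
            - mongod
            - "--replSet"
            - rs1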

The only thing I can imagine is that it is not possible to run both of these deployments on port 27017, despite them being in separate namespaces.
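
(One way I could test this theory is to resolve each namespace's headless Service from inside the cluster, since each should resolve to its own pod IPs; the production Service name mongodb-production below is an assumption.)

# Resolve each namespace's headless Service from a throwaway pod
kubectl run dns-check --rm -it --restart=Never --image=busybox -- \
  nslookup mongodb-staging.staging.svc.cluster.local

kubectl run dns-check --rm -it --restart=Never --image=busybox -- \
  nslookup mongodb-production.production.svc.cluster.local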

I am stuck as to what is causing the "failed to connect to server" error described above.

Full log of the error

Error in workloop { MongoError: failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
    at Pool.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/topologies/server.js:336:35)
    at Pool.emit (events.js:182:13)
    at Connection.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:280:12)
    at Object.onceWrapper (events.js:273:13)
    at Connection.emit (events.js:182:13)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:189:49)
    at Object.onceWrapper (events.js:273:13)
    at Socket.emit (events.js:182:13)
    at emitErrorNT (internal/streams/destroy.js:82:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
  name: 'MongoError',
  message:
   'failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]' }
-- Nick
kubernetes
mongodb

2 Answers

8/22/2019

It seems that the problem is similar to: mongodb-error, but you still have two databases listening on the same port.

In the context of two MongoDB databases listening on the same port:

The answer differs depending on what OS is being considered. In general though:

  • For TCP, no. Only one application can listen on a given IP address and port at a time. If you had two network cards, you could have one application listen on the first IP and a second application listen on the second IP, using the same port number.
  • For UDP (multicast), multiple applications can subscribe to the same port.

However, since Linux kernel 3.9, multiple applications can listen on the same port by using the SO_REUSEPORT socket option. More information is available in this lwn.net article.

But there is a workaround.

Run the containers on different ports and set up Apache or Nginx as a reverse proxy. Since Apache/Nginx serves on port 80, you won't lose any traffic, as 80 is the common port.

I recommend Nginx: I find it much easier to set up a reverse proxy with Nginx, and it is lighter on resources compared to Apache. For Nginx, you need to install it and learn about server blocks: How To Install Nginx on Ubuntu 16.04 and How To Set Up Nginx Server Blocks (Virtual Hosts) on Ubuntu 16.04. In the server blocks you need to use proxy_pass, which you can learn more about on the nginx site; a minimal sketch is shown below.
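
A minimal sketch of such a server block with proxy_pass, where the hostname and backend port are only placeholders (note that MongoDB speaks its own TCP protocol rather than HTTP, so for MongoDB itself you would proxy with the nginx stream module rather than an http server block):

# Hypothetical server block; hostname and backend port are placeholders.
server {
    listen 80;
    server_name db-staging.example.com;

    location / {
        # Forward incoming HTTP requests to the backend moved to port 27018
        proxy_pass http://127.0.0.1:27018;
    }
}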

-- MaggieO
Source: StackOverflow

8/21/2019

It seems like the error you are getting is from the mongo-sidecar container in the pod. As for why the mongo container is failing, can you obtain more detailed information? It could be something like a failed PVC.
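
For example, something along these lines would show whether the mongo container or its PersistentVolumeClaim is the problem (the pod and PVC names are derived from the StatefulSet above):

# Logs of the mongod container itself, not the sidecar
kubectl -n staging logs mongodb-staging-0 -c mongo

# Pod events, e.g. scheduling or volume-attach failures
kubectl -n staging describe pod mongodb-staging-0

# Status of the claim created from the volumeClaimTemplate
kubectl -n staging get pvc mongo-persistent-storage-mongodb-staging-0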

-- Jamie
Source: StackOverflow