Expose MongoDB on Kubernetes with StatefulSets outside cluster

1/31/2017

I followed the guide in the following link: http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html

and set up a MongoDB replica set on Kubernetes with StatefulSets. So far so good, but how do I expose those stable hostnames outside the cluster so that I can access them from a Google Compute Engine instance, for example?

If I use the IPs of the pods it works fine, but those can change at any time (e.g. when a pod fails and is restarted with a different IP)...

Thanks in advance!

-- tzik
google-cloud-platform
kubernetes
mongodb

3 Answers

2/21/2017

It looks like the answer is present in the StatefulSet Basics documentation section Using Stable Network Identities:

The Pods’ ordinals, hostnames, SRV records, and A record names have not changed, but the IP addresses associated with the Pods may have changed. In the cluster used for this tutorial, they have. This is why it is important not to configure other applications to connect to Pods in a StatefulSet by IP address.

If you need to find and connect to the active members of a StatefulSet, you should query the CNAME of the Headless Service (nginx.default.svc.cluster.local). The SRV records associated with the CNAME will contain only the Pods in the StatefulSet that are Running and Ready.

If your application already implements connection logic that tests for liveness and readiness, you can use the SRV records of the Pods (web-0.nginx.default.svc.cluster.local, web-1.nginx.default.svc.cluster.local), as they are stable, and your application will be able to discover the Pods’ addresses when they transition to Running and Ready.
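To make use of those stable names with MongoDB, you can assemble a seed-list connection string from the per-pod DNS records. A minimal sketch, assuming a 3-replica StatefulSet named mongo behind a headless Service also named mongo in the default namespace, with replica set rs0 (names taken from the linked guide; verify against your own setup):

```shell
# Assemble the stable per-pod hostnames into a MongoDB seed-list URI.
# Assumptions: StatefulSet/Service "mongo", namespace "default",
# 3 replicas, replica set "rs0" -- adjust to match your cluster.
SVC=mongo
NS=default
HOSTS=""
for i in 0 1 2; do
  HOSTS="${HOSTS}${SVC}-${i}.${SVC}.${NS}.svc.cluster.local:27017,"
done
URI="mongodb://${HOSTS%,}/?replicaSet=rs0"
echo "$URI"
```

From inside the cluster, a URI like this can be passed straight to a mongo client. Note that these names resolve only through the cluster DNS, which is exactly why they do not work from outside the cluster without additional exposure.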

-- Adam
Source: StackOverflow

8/30/2017

I would strongly suggest taking a glance at the service docs to make sure you're familiar with what is happening:

https://kubernetes.io/docs/concepts/services-networking/service/

A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service.

With that in mind and the guide you're using, note the following:

You can tell this is a Headless Service because the clusterIP is set to “None.” Other than that, it looks exactly the same as any normal Kubernetes Service.

So what you've created is a headless service (no load balancer or exposed IPs).

So instead of the configuration given for a headless service:

    apiVersion: v1
    kind: Service
    metadata:
      name: mongo
      labels:
        name: mongo
    spec:
      ports:
      - port: 27017
        targetPort: 27017
      clusterIP: None
      selector:
        role: mongo

What you actually want is:

    apiVersion: v1
    kind: Service
    metadata:
      name: mongo
      labels:
        name: mongo
    spec:
      ports:
      - protocol: TCP
        port: 27017
        targetPort: 27017
      selector:
        role: mongo

It's subtle, but you'll notice that the clusterIP property no longer exists.

I also prefer to specify the protocol, for completeness, even though TCP is the default.
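Note, though, that a plain ClusterIP service is still only reachable from inside the cluster. Since the question asks about access from outside (e.g. a GCE instance), one option is a NodePort service; the sketch below is my own assumption, not from the guide (the name mongo-external and port 30017 are made up for illustration):

    apiVersion: v1
    kind: Service
    metadata:
      name: mongo-external   # hypothetical name, not from the guide
    spec:
      type: NodePort
      ports:
      - protocol: TCP
        port: 27017
        targetPort: 27017
        nodePort: 30017      # assumed; must be in the cluster's NodePort range (default 30000-32767)
      selector:
        role: mongo

Clients outside the cluster could then reach any node's IP on port 30017; a Service of type LoadBalancer would avoid depending on node IPs directly.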

-- Spets
Source: StackOverflow

2/1/2017

You need to expose the service (svc). As you said, the pods will by definition have different IPs.

In the example mentioned at https://kubernetes.io/docs/user-guide/petset/ , you will notice the service definition.

    foo.default.svc.cluster.local
             |service|
             /       \
    | pod-asdf |    | pod-zxcv |

This is what you need to concentrate on. The service, once tied to DNS, gives you a stable lookup. By the way, StatefulSets are the maturation of the earlier PetSets.

-- jkantihub
Source: StackOverflow