How can I set up two different containers to run on two different DNS names in Kubernetes?

11/15/2016

So I've got my backend and front-end as separate containers in a single Kubernetes Deployment.

At the moment I'm having to access the front-end & backend via different ports.

E.g. example.com:5000 = frontend & example.com:7000 = backend

I'm wondering how I can set up my front-end container to run on www.example.com and my backend container to run on api.example.com.

I'm using GCP (Google Cloud) and have set up my DNS properly, but I'm having to access the services (web apps) using the ports I assigned to each of them (5000 = frontend, 7000 = backend).

I'm thinking of a possible solution, which is manual, but am wondering whether there is something built into Kubernetes. The solution would be:

I'd set up an nginx container in my Kubernetes cluster that would run on port 80, so any request that comes through would be proxied to the appropriate port:

E.g. I could have api.example.com point to <my_cluster_ip>/backend and www.example.com point to <my_cluster_ip>/frontend, and let nginx proxy /backend to port 7000 and /frontend to port 5000.
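The manual idea would look roughly like the nginx config below. This is only a sketch of what I mean: it routes by Host header rather than by path prefix, and since both app containers would share the pod's network namespace with nginx, it proxies to 127.0.0.1.

```
# Sketch of the manual nginx reverse-proxy approach.
# Routes by Host header; ports match those assigned to each container.
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:7000;  # backend container
    }
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://127.0.0.1:5000;  # frontend container
    }
}
```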

I'm hoping there is something built into Kubernetes that I can use instead. Here is my deployment config as it stands:

{
  "apiVersion": "extensions/v1beta1",
  "kind": "Deployment",
  "metadata": {
    "name": "my_container"
  },
  "spec": {
    "replicas": 1,
    "template": {
      "metadata": {
        "labels": {
          "app": "my_app"
        }        
      },
      "spec": {
        "containers": [
          {
            "name": "backend",
            "image": "backend_url",
            "ports": [
              {
                "containerPort": 7000
              }
            ],
            "imagePullPolicy": "Always",
            "env": [
              {
                "name": "NODE_PORT",
                "value": "7000"
              },
              {
                "name": "NODE_ENV",
                "value": "production"
              }
            ]
          },
          {
            "name": "frontend",
            "image": "frontend_url",
            "ports": [
              {
                "containerPort": 5000
              }
            ],
            "imagePullPolicy": "Always",
            "env": [
              {
                "name": "PORT",
                "value": "5000"
              },
              {
                "name": "NODE_ENV",
                "value": "production"
              }
            ]
          }
        ]
      }
    }
  }
}
-- James111
dns
google-cloud-platform
kubernetes
nginx

2 Answers

11/15/2016

Well, for starters, you should not expose your application via the Deployment itself. Instead, you should put your Deployment(s) behind Service(s). Read up on http://kubernetes.io/docs/user-guide/services/ for that.

When you go through the documentation, you will notice that it is perfectly possible to define two Services that match the same backing Pods (Endpoints) but on different ports (i.e. front: 80->5000, api: 80->7000). The problem is that this still exposes your workloads only inside the k8s cluster. To publish them externally you can use a Service of type NodePort or LoadBalancer (the first has the disadvantage of using high ports to expose your services to the public; the second creates a separate load balancer, and hence IP, per Service).
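Those two Services could be sketched like this, assuming Pods labelled `app: my_app` as in your Deployment and the container ports from your question (the Service names are just examples):

```
# Two Services selecting the same Pods, each targeting a different container port.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: my_app          # matches the Pod template labels in the Deployment
  ports:
    - port: 80           # port the Service exposes
      targetPort: 5000   # containerPort of the frontend container
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: my_app
  ports:
    - port: 80
      targetPort: 7000   # containerPort of the backend container
```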

What I personally prefer for exposing services publicly is an Ingress backed by an Ingress controller: http://kubernetes.io/docs/user-guide/ingress/
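A name-based Ingress can then route both hostnames to the right place. This sketch assumes hypothetical Services named `frontend` and `api` exposing port 80; on GKE an Ingress controller is provided out of the box and the Ingress gets a single external IP you can point both DNS records at:

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - backend:
              serviceName: frontend   # hypothetical Service name
              servicePort: 80
    - host: api.example.com
      http:
        paths:
          - backend:
              serviceName: api        # hypothetical Service name
              servicePort: 80
```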

Finally, when you split your solution into two Services (front/api), you will see that there is no real reason to keep the containers together in one Deployment/Pod. If you separate them into two distinct Deployments, you will get a more flexible architecture and more fine-grained control over your solution.
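Splitting the combined Deployment could look like the sketch below (frontend shown; the backend Deployment is analogous with its own label, image, and port). Note the label differs from the combined Deployment's, so each Service would select its own component:

```
# One Deployment per component; each can be scaled and updated independently.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend    # a frontend Service would select this label
    spec:
      containers:
        - name: frontend
          image: frontend_url
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
          env:
            - name: PORT
              value: "5000"
            - name: NODE_ENV
              value: "production"
```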

-- Radek 'Goblin' Pieczonka
Source: StackOverflow

11/15/2016

Using nginx to route requests to k8s IP addresses is unnecessary. To make it work you would need each Pod's DNS name, which is derived from its IP address. If you scale that Pod, you have to modify the nginx config each time with the new hostname/DNS name, and if your Pods get killed, there is no guarantee they'll get the same IPs after a restart. So basically, not a good approach.

A better design is probably to separate the front end from the back end, so you can deploy them independently. The back end will probably be more resource-hungry, and when you scale it you don't need to carry the front end along and share resources with it.

If you choose to separate your services, check out k8s Services. They are easy to understand and quick to set up. After you create a k8s Service for the front end and one for the back end, you automatically get a DNS name resolving to the name you gave each Service (and you can use it in your code, of course).
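For example, a Service named `backend` (a hypothetical name) in the `default` namespace becomes resolvable from any Pod via the cluster DNS add-on:

```
backend.default.svc.cluster.local   # fully qualified form
backend.default                     # namespace-qualified form
backend                             # short form, from Pods in the same namespace
```

So the front end can simply call http://backend:80 (or whatever port the Service exposes), regardless of which Pods currently back the Service.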

Since you are using GKE, you can expose those Services (or only the front end) to the world via a load balancer by using a k8s Ingress.

-- Dimitar Damyanov
Source: StackOverflow