Kubernetes: Multiple containers that have to communicate + exposed nodePort

6/15/2018

In my setup, there is a set of containers that were initially built to run with docker-compose. After moving to Kubernetes I'm facing the following challenges:

  1. docker-compose managed to provide some type of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes? What I found so far:

    • they could all be part of one pod and therefore communicate via localhost
    • they could all have a common label with matching key:value pairs and a service, but how does one handle ports?
  2. I need to expose an internal port on a certain NodePort, as it has to be publicly available. What does such a service config look like? What I found so far:

    • something like this:

      apiVersion: v1
      kind: Service
      metadata:
        labels:
          app: frontend
        name: frontend-nodeport
      spec:
        type: NodePort
        ports:
        - name: "3000-30001"
          port: 3000
          nodePort: 30001
        selector:
          app: frontend
      status:
        loadBalancer: {}
-- Gerrit Sedlaczek
kubernetes

2 Answers

6/15/2018

Docker-compose managed to provide some type of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes?

As you researched, you can indeed take two approaches:

  • If your containers are to be scaled together, place them inside the same pod and have them communicate through localhost over separate ports. This is less likely your case, since this approach suits a containerized app that behaves more like processes on one physical box than like separate services/servers.

  • If your containers are to be scaled separately, which is more likely your case, use services. With services, in place of localhost (from the previous point) you will use either the service name as-is (if the pods are in the same namespace) or the FQDN (servicename.namespace.svc.cluster.local) if services are accessed across namespaces. As opposed to the previous point, where each container needed a different port (since you address localhost), here you can reuse the same port across multiple services, since only service:port must be unique. A service can also remap ports, exposing a different port than the one the container listens on, if you wish to do so.
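For the first approach, here is a minimal sketch of a pod running two containers that talk over localhost (the names, images, and ports are placeholders for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
  - name: frontend
    image: nginx              # placeholder image
    ports:
    - containerPort: 80
  - name: backend
    image: my-backend:latest  # hypothetical image
    ports:
    - containerPort: 3000     # frontend reaches this at localhost:3000
```

Both containers share the pod's network namespace, which is why their ports must not collide.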

Since you asked this as an introductory question, two words of caution:

  • Service resolution works from the standpoint of a pod/container. To test it, you need to exec into an actual container (or proxy from a host); this is a common point of confusion. To be on the safe side, test service:port accessibility from within an actual container, not from the master.
  • Finally, just to mimic a docker-compose setup for inter-container networking, you don't need to expose a NodePort at all. The service layer in Kubernetes takes care of DNS handling; NodePort has a different purpose.
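For example, to test resolution from inside a pod (the pod name my-pod and service name svc-my-api here are placeholders):

```sh
# Open a shell inside a running container:
kubectl exec -it my-pod -- sh

# Then, from inside the container, check DNS and connectivity:
nslookup svc-my-api
wget -qO- http://svc-my-api:8080
```

Running the same nslookup from a node or the master will not resolve the service name, since nodes do not use the cluster DNS by default.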

I need to expose an internal port on a certain NodePort. What does such a service config look like?

You are on a good track; here is a nice overview to get you started, and a reference relevant to your question is given below (note that selector belongs under spec):

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30036
    protocol: TCP

Edit: Could you please provide an example of what a service.yaml would look like if the containers are scaled separately?

  • The first one is, say, an API server; we'll call it svc-my-api. It will use pod(s) labeled app: my-api, target the pods' port 80, and be reachable by other pods (in the same namespace) as host svc-my-api on port 8080.

    apiVersion: v1
    kind: Service
    metadata:
      name: svc-my-api
      labels:
        app: my-api
    spec:
      selector:
        app: my-api
      ports:
      - protocol: TCP
        port: 8080
        targetPort: 80
  • The second one is, say, a MySQL server; we'll call it svc-my-database. Supposing that containers from the API pods (covered by the previous service) want to access the database, they will use host svc-my-database and port 3306.

    apiVersion: v1
    kind: Service
    metadata:
      name: svc-my-database
      labels:
        app: my-database
    spec:
      selector:
        app: my-database
      ports:
      - name: mysql
        protocol: TCP
        port: 3306
        targetPort: 3306
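To illustrate how the API pods would consume this, a deployment could pass the service name in via environment variables (a minimal sketch; the variable names DB_HOST/DB_PORT and the image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: api
        image: my-api:latest        # hypothetical image
        ports:
        - containerPort: 80
        env:
        - name: DB_HOST             # hypothetical variable name
          value: svc-my-database    # resolved by cluster DNS
        - name: DB_PORT
          value: "3306"
```

Note the label app: my-api on the pod template, which is what svc-my-api selects on.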
-- Const
Source: StackOverflow

6/15/2018

1.- You can add some parameters to your pod resource (or any other resource that creates pods), as follows:

...
spec:
  hostname: foo-{1..4}        #keep in mind this line
  subdomain: bar              #and this line
  containers:
  - image: busybox
...

Note: imagine you just created 4 pods, with hostnames foo-1, foo-2, foo-3 and foo-4. These are separate pods; you can't actually write foo-{1..4} in a manifest, so that line is just for demo purposes.
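A complete spec for one of those pods might look like this (the image and command are placeholders to keep the container alive):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo-1
  labels:
    app: my-app
spec:
  hostname: foo-1
  subdomain: bar
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]   # keep the container running for the demo
```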

If you now create a headless service with the same name as the subdomain, you can reach the pod from anywhere in the cluster as hostname.service-name.namespace.svc.cluster.local.

Example:

apiVersion: v1
kind: Service
metadata:
  name: bar        #my subdomain is called "bar", so is this service
spec:
  clusterIP: None  #headless service, needed for per-pod DNS records
  selector:
    app: my-app
  ports:
  - name: foo
    port: 1234
    targetPort: 1234

Now, say I have the label app: my-app in my pods, so the service is targeting them correctly.

At this point, look what happens (from any pod, within the cluster):

/ # nslookup foo-1.bar.my-namespace.svc.cluster.local
Server:    10.63.240.10
Address 1: 10.63.240.10 kube-dns.kube-system.svc.cluster.local

Name:      foo-1.bar.my-namespace.svc.cluster.local
Address 1: 10.60.1.24 foo-1.bar.my-namespace.svc.cluster.local

2.- The second part of your question is almost correct. This is a NodePort service:

apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: my-app
  type: NodePort

This service listens on port 80, so it is reachable on port 80 from within the cluster (via its ClusterIP). Kubernetes will also assign it a port in the 30000-32767 range on every node; say it gets 30001, then the same service is available on port 30001 of any node from the outside world. Finally, the service forwards the requests to port 8080 of the container.
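Once the service is created, you can look up which node port was assigned and reach it from outside (the node IP and port below are placeholders):

```sh
# The assigned node port appears in the PORT(S) column, e.g. 80:3xxxx/TCP:
kubectl get svc svc-nodeport

# From outside the cluster, hit any node on that port;
# the request ends up on port 8080 of a matching container:
curl http://<node-ip>:30001
```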

-- suren
Source: StackOverflow