Endpoint URL management in Kubernetes

9/25/2017

I'm a super beginner with Kubernetes and I'm trying to figure out how to split my monolithic application into different microservices. Let's say I'm writing my microservices in Flask and each of them exposes some endpoints, like:

Microservice 1:

  • /v1/user-accounts

Microservice 2:

  • /v1/savings

Microservice 3:

  • /v1/auth

If all of them were running as blueprints in a monolithic application, they would all be prefixed with the same IP, that is, the IP of the host server my application runs on, e.g. 10.12.234.69.

Now, deploying those 3 "blueprints" to 3 different Pods/Nodes in Kubernetes means each endpoint will have a different IP address, maybe 10.12.234.69, then 10.12.234.70 or 10.12.234.75.

How can I write an application that keeps the URL reference constant even if the IP addresses change?

  • Would a Load Balancer Service do the trick?
  • Maybe the Service Registry feature of Kubernetes does the "DNS" part for me?

I know it may sound like a very obvious question, but I still cannot find any reference/example for this simple problem.

Thanks in advance!

EDIT (as a follow-up to Simon's answer):

Questions:

  • Given that the Ingress spawns a load balancer and all the routes are reachable under the load balancer's IP plus an HTTP path (http://<ADDRESS>/v1/savings), how can I associate an IP with the load balancer so that it matches the IP of the pod the Flask web server is running on?

  • If I add other sub-routes under the same paths, like /v1/savings/get and /v1/savings/get/id/<var_id>, do I have to list all of them as Ingress HTTP paths for them to be reachable through the load balancer?

-- Francesco Di Benedetto
flask
google-cloud-platform
kubernetes
url

1 Answer

9/25/2017
  1. A load balancer is what you are looking for.
  2. Kubernetes Services make your pods accessible cluster-internally under a given hostname.

If you want to make your services accessible from outside the cluster under a single IP and different paths, you can use a load balancer together with Kubernetes HTTP Ingresses. An Ingress defines under which domain and path a service should be reachable; an Ingress controller reads it to build the load balancer's configuration.

Example based on your microservice architecture:

Mocking applications

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: user-accounts
spec:
  template:
    metadata:
      labels:
        app: user-accounts
    spec:
      containers:
      - name: server
        image: nginx
        ports:
        - containerPort: 80
        command:
        - /bin/bash
        - "-c"
        - echo 'server { location /v1/user-accounts { return 200 "user-accounts"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: savings
spec:
  template:
    metadata:
      labels:
        app: savings
    spec:
      containers:
      - name: server
        image: nginx
        ports:
        - containerPort: 80
        command: 
        - /bin/bash 
        - "-c" 
        - echo 'server { location /v1/savings { return 200 "savings"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: server
        image: nginx
        ports:
        - containerPort: 80
        command: 
        - /bin/bash 
        - "-c" 
        - echo 'server { location /v1/auth { return 200 "auth"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'

These deployments represent your services and just return their name via HTTP under /v1/<name>.
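Note that these manifests use the extensions/v1beta1 API, which was current when this answer was written; on newer clusters (Kubernetes 1.16 and later) Deployments are served from apps/v1 and require an explicit selector. A minimal sketch of the user-accounts Deployment on apps/v1, otherwise unchanged:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-accounts
spec:
  replicas: 1
  # apps/v1 requires an explicit selector matching the pod template labels
  selector:
    matchLabels:
      app: user-accounts
  template:
    metadata:
      labels:
        app: user-accounts
    spec:
      containers:
      - name: server
        image: nginx
        ports:
        - containerPort: 80
        command:
        - /bin/bash
        - "-c"
        - echo 'server { location /v1/user-accounts { return 200 "user-accounts"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'

The savings and auth Deployments would change in the same way.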

Mapping applications to services

---
kind: Service
apiVersion: v1
metadata:
  name: user-accounts
spec:
  type: NodePort
  selector:
    app: user-accounts
  ports:
  - protocol: TCP
    port: 80
---
kind: Service
apiVersion: v1
metadata:
  name: savings
spec:
  type: NodePort
  selector:
    app: savings
  ports:
  - protocol: TCP
    port: 80
---
kind: Service
apiVersion: v1
metadata:
  name: auth
spec:
  type: NodePort
  selector:
    app: auth
  ports:
  - protocol: TCP
    port: 80

Each of these Services gets a cluster-internal IP and a DNS name derived from the Service name, and maps traffic to the pods matched by its selector. Applications running in the same cluster namespace will be able to reach them under user-accounts, savings and auth.
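If you want to verify the cluster-internal DNS yourself, you could run a throwaway pod in the same namespace and query one of the services by name. A minimal sketch, assuming the default namespace and the busybox image (the pod name dns-test is just illustrative):

---
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: busybox
    command:
    - /bin/sh
    - "-c"
    # busybox's wget resolves "savings" through the Service's cluster DNS entry
    - wget -qO- http://savings/v1/savings

kubectl logs dns-test should then print savings once the pod has completed.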

Making services reachable via load balancer

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - http:
      paths:
      - path: /v1/user-accounts
        backend:
          serviceName: user-accounts
          servicePort: 80
      - path: /v1/savings
        backend:
          serviceName: savings
          servicePort: 80
      - path: /v1/auth
        backend:
          serviceName: auth
          servicePort: 80

This Ingress defines under which paths the different services should be reachable. Verify your Ingress via kubectl get ingress:

# kubectl get ingress
NAME      HOSTS     ADDRESS   PORTS     AGE
example   *                   80        1m

If you are running on Google Container Engine, there is an Ingress controller running in your cluster which will spawn a Google Cloud Load Balancer when you create a new Ingress object. Under the ADDRESS column of the above output, there will be an IP displayed under which you can access your applications:

# curl http://<ADDRESS>/v1/user-accounts
user-accounts⏎
# curl http://<ADDRESS>/v1/savings
savings⏎
# curl http://<ADDRESS>/v1/auth
auth⏎
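Regarding sub-routes such as /v1/savings/get: whether an Ingress path also matches everything below it depends on the Ingress controller in use. With the GCE controller assumed here, a common pattern is to add a trailing /* wildcard entry next to the exact path so that individual sub-routes do not have to be listed one by one; treat this as a sketch to adapt to your controller's path-matching rules:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - http:
      paths:
      # exact match for /v1/savings itself
      - path: /v1/savings
        backend:
          serviceName: savings
          servicePort: 80
      # wildcard so /v1/savings/get, /v1/savings/get/id/42, ... hit the same backend
      - path: /v1/savings/*
        backend:
          serviceName: savings
          servicePort: 80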
-- Simon Tesar
Source: StackOverflow