Conditional reverse proxy in Kubernetes based on server response

4/3/2020

TL;DR: I am looking for a solution that would allow me to proxy traffic to one of two different Kubernetes services, based on the response from one of them.

Background: I have an existing application hosted on Kubernetes. Recently I started rewriting one of my microservices in order to speed it up and add a few new features. I want to allow my users to decide whether they want to start using this new service or stick with the old one (since some features have breaking changes for their use case). Since users usually reach this microservice using an address like username.given-microservice.example.com, my initial plan was to set up some proxy between these services, which could query one of my endpoints with a request like: http://my-new-service.example.com/enabled-for-client?=username

  • if it returned code 200, then the client would be forwarded to the new service
  • if the response code was anything else, then the client would be forwarded to the old service.

Of course, the response from the URI above would depend on user settings.

This scenario is very similar to A/B testing, but I do not know of, and have had trouble finding, any way to set up a proxy whose routing is based on the response from a URL.

I would highly appreciate any suggestions, blog posts, or links to documentation that could help me solve this scenario - at the moment I have run out of ideas and feel a bit stuck.

-- Mossie93
kubernetes
reverse-proxy

2 Answers

4/3/2020

Envoy can manage such a scenario; start by looking at HTTP routing. If you cannot find what you're looking for there, you can always write filter/routing rules in Lua.
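
If it helps, below is a rough, untested sketch of the Lua-filter approach (a fragment of an http_connection_manager config, not a complete bootstrap). The cluster names (new-service, old-service, decision-service), the x-use-new-service marker header, and the exact shape of the /enabled-for-client query are assumptions taken from the question, not verified configuration:

http_filters:
- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    inline_code: |
      function envoy_on_request(request_handle)
        -- Derive the username from the Host header
        -- ("username.given-microservice.example.com" -> "username").
        local authority = request_handle:headers():get(":authority") or ""
        local username = string.match(authority, "^([^.]+)") or ""
        -- Ask the decision endpoint whether this user gets the new service.
        local headers, body = request_handle:httpCall(
          "decision-service",
          {
            [":method"] = "GET",
            [":path"] = "/enabled-for-client?user=" .. username,
            [":authority"] = "my-new-service.example.com"
          },
          "", 500)
        -- Mark the request so the route table below can match on it.
        if headers[":status"] == "200" then
          request_handle:headers():add("x-use-new-service", "true")
        end
      end
- name: envoy.filters.http.router
route_config:
  virtual_hosts:
  - name: app
    domains: ["*"]
    routes:
    # Requests marked by the Lua filter go to the new service...
    - match:
        prefix: "/"
        headers:
        - name: x-use-new-service
          exact_match: "true"
      route:
        cluster: new-service
    # ...everything else keeps going to the old one.
    - match:
        prefix: "/"
      route:
        cluster: old-service

Keep in mind this adds an extra HTTP round trip to every request, so you may want to cache the decision (for example in a cookie) rather than calling the endpoint each time.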

-- Kartoch
Source: StackOverflow

4/6/2020

It's possible to achieve this by using NGINX Ingress with custom-http-errors and default-backend annotations.

I created a lab to prove the concept. Let's dive into it together.

First of all, you need to have NGINX Ingress installed in your cluster. If you don't have it yet, follow the installation guide.
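
If you are not sure whether the controller is already running, a quick check (assuming it was installed into the usual ingress-nginx namespace) is:

$ kubectl get pods -n ingress-nginx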

In this POC we will deploy 2 different applications. One is called old-http-backend and serves the default nginx landing page. The second is called new-http-backend and serves an echo-server landing page.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: old-http-backend
spec:
  selector:
    matchLabels:
      app: old-http-backend
  template:
    metadata:
      labels:
        app: old-http-backend
    spec:
      containers:
      - name: old-http-backend
        image: nginx
        ports:
        - name: http
          containerPort: 80
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: old-http-backend
spec:
  selector:
    app: old-http-backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-http-backend
spec:
  selector:
    matchLabels:
      app: new-http-backend
  template:
    metadata:
      labels:
        app: new-http-backend
    spec:
      containers:
      - name: new-http-backend
        image: inanimate/echo-server
        ports:
        - name: http
          containerPort: 8080
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: new-http-backend
spec:
  selector:
    app: new-http-backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

After applying this manifest we have the following deployments and services:

$ kubectl get deployments 
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
new-http-backend   1/1     1            1           2s
old-http-backend   1/1     1            1           43m
$ kubectl get service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes         ClusterIP   10.31.240.1     <none>        443/TCP   152m
new-http-backend   ClusterIP   10.31.240.168   <none>        80/TCP    44m
old-http-backend   ClusterIP   10.31.242.175   <none>        80/TCP    44m

And now we can apply our Ingress that will be responsible for doing all the magic for us:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/custom-http-errors: '403,404,500,502,503,504'
    nginx.ingress.kubernetes.io/default-backend: old-http-backend
spec:
  rules:
  - host: app.company.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: new-http-backend
          servicePort: 80

What is this Ingress doing?

In the Custom HTTP Errors documentation we can read that if a default-backend annotation is specified on the Ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend).

So by adding these annotations to our Ingress rule we are saying that all requests should go to new-http-backend unless it responds with one of the status codes listed in custom-http-errors. If that happens, the request is served by old-http-backend instead, as specified in the default-backend annotation.

nginx.ingress.kubernetes.io/custom-http-errors: '403,404,500,502,503,504'
nginx.ingress.kubernetes.io/default-backend: old-http-backend
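
To see the fallback in action you can force new-http-backend to return one of the listed codes. One blunt but quick way, assuming the manifests above, is to scale its deployment down to zero; with no endpoints available the ingress controller answers with a 503, which is on the custom-http-errors list, so the request gets served by old-http-backend instead:

$ kubectl scale deployment new-http-backend --replicas=0
$ curl -H "Host: app.company.com" http://<ingress-controller-ip>/

The response should now be the default nginx welcome page from old-http-backend; scaling the deployment back up restores the normal routing.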
-- mWatney
Source: StackOverflow