Is a reconfiguration of running Ingress in Kubernetes possible without downtime?

3/26/2018

We are currently facing the following situation:

Ingress1_legacy: service.domain.com
  / >> service_legacy

Ingress2_new: service_one.domain.com, service_two.domain.com
  /one >> service_new_one
  /two >> service_new_two

Our plan is to seamlessly redirect service.domain.com to service_new_one. The idea was to edit Ingress1 so that it points to service_new_one, like this:

Ingress1_legacy (updated): service.domain.com
  / >> service_new_one

What we experience is that as soon as we change the configuration of Ingress1_legacy, calls to service.domain.com result in 502 errors. This situation persisted long enough that we preferred to roll back to the original configuration.

So is this a feasible strategy? Is our assumption correct that changing the service route in the Ingress should allow for a seamless, immediate migration to the other service? Or does a change to an Ingress configuration normally lead to some downtime in the load balancing?

-- Markward Schubert
google-cloud-platform
google-kubernetes-engine
kubernetes
kubernetes-ingress

1 Answer

3/30/2018

Short answer: when you update the Ingress resource there is a short period of downtime, due to the reconfiguration that has to happen both on the Google Cloud Platform side and on the Kubernetes side.

I do not exclude that there are ways to minimise or eliminate the downtime, but if you simply update the Ingress in place you will experience it.
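For reference, updating the Ingress in place typically means applying the modified manifest and then watching the resource until the controller has reconciled it. A sketch (the file name `ingress.yaml` is hypothetical; `test-ingress` is the resource name used in the experiment below):

```shell
# Apply the edited Ingress manifest in place.
kubectl apply -f ingress.yaml

# Watch the Ingress until the controller reports the new backends;
# during the reconfiguration window requests may return 502 or 404.
kubectl get ingress test-ingress --watch
```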


Small experiment: we have the following Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /index.html
        backend:
          serviceName: httpd
          servicePort: 80
      - path: /nginx
        backend:
          serviceName: nginx
          servicePort: 80

And 4 services (nginx, nginx2, httpd and httpd2) pointing to 4 different deployments, each of them on a different node.

Run:

kubectl run nginx --image=nginx
kubectl run nginx2 --image=nginx
kubectl run httpd --image=httpd:2.4
kubectl run httpd2 --image=httpd:2.4

and create:

kind: Service
apiVersion: v1
metadata:
  name: httpd
spec:
  selector:
    run: httpd
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
kind: Service
[...]

We connect to http://ingress-ip/index.html and we see the classic "It works!" Apache page.

As soon as you change the Ingress to point to nginx2 and httpd2:

  • for ~1 minute the Ingress continues to serve the old services
  • for ~1 minute /nginx returns "Error: Server Error" and /index.html returns "default backend - 404"
  • after about 3 minutes we are finally back in a stable situation
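The gap described above can be made visible with a simple polling loop against the Ingress address (here `ingress-ip` is a placeholder for the actual external IP); a sketch:

```shell
# Poll /index.html once per second and log the HTTP status code,
# so the 502/404 window after the Ingress update shows up in the output.
while true; do
  code=$(curl -s -o /dev/null -w '%{http_code}' http://ingress-ip/index.html)
  echo "$(date +%T) $code"
  sleep 1
done
```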
-- GalloCedrone
Source: StackOverflow