Use Traefik to load-balance the Kubernetes apiserver

4/30/2018

We're currently trying out Traefik and are considering using it as the ingress controller for our internal Kubernetes cluster.

Now I wonder: is it possible to use Traefik to load-balance the kube-apiserver? We have an HA setup with 3 masters.

How would I proceed here?

Basically, I just want to load-balance the API requests from all nodes in the cluster across the 3 masters.

Should I just run traefik outside the cluster?

I'm trying to wrap my head around this... I'm having a hard time understanding how this could work together with Traefik as an ingress controller.

Thanks for any input, much appreciated!

-- jesusofsuburbia
high-availability
kubernetes
kubernetes-ingress
load-balancing
traefik

1 Answer

4/30/2018

One way to achieve this is to use the file provider and create a static setup pointing at your API server nodes; something like this (untested):

[file]
[backends]
  [backends.backend1]
    [backends.backend1.servers]
      # One server entry per master; equal weights mean simple round-robin.
      [backends.backend1.servers.server1]
        url = "http://apiserver1:80"
        weight = 1
      [backends.backend1.servers.server2]
        url = "http://apiserver2:80"
        weight = 1
      [backends.backend1.servers.server3]
        url = "http://apiserver3:80"
        weight = 1

[frontends]
  [frontends.frontend1]
    entryPoints = ["http"]
    backend = "backend1"
    passHostHeader = true

    # Route any request whose Host header is "apiserver" to the backend above.
    [frontends.frontend1.routes]
      [frontends.frontend1.routes.route1]
        rule = "Host:apiserver"

(This assumes a simple HTTP-only setup; HTTPS would need some additional configuration.)

When Traefik is given this piece of configuration (plus whatever else you need to set via the TOML file or CLI parameters), it will round-robin requests carrying a Host header of apiserver across the three nodes.
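If you do need to go the HTTPS route, a rough, untested sketch of what that extra setup might look like with Traefik 1.x is below. The hostnames, the 6443 port, and the decision to skip backend certificate verification are all placeholders/assumptions, and keep in mind that terminating TLS in Traefik has consequences for client-certificate authentication against the apiserver.

# Global option: do not verify the apiservers' serving certificates.
# (Assumption: acceptable in your environment; otherwise point Traefik at the cluster CA instead.)
insecureSkipVerify = true

[entryPoints]
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]   # no certificate configured -> Traefik serves a default self-signed one

[file]
[backends]
  [backends.backend1]
    [backends.backend1.servers]
      # Same round-robin setup as above, but targeting the secure apiserver port.
      [backends.backend1.servers.server1]
        url = "https://apiserver1:6443"
        weight = 1
      [backends.backend1.servers.server2]
        url = "https://apiserver2:6443"
        weight = 1
      [backends.backend1.servers.server3]
        url = "https://apiserver3:6443"
        weight = 1

[frontends]
  [frontends.frontend1]
    entryPoints = ["https"]
    backend = "backend1"
    passHostHeader = true

    [frontends.frontend1.routes]
      [frontends.frontend1.routes.route1]
        rule = "Host:apiserver"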

Another potential option is to create a Service object capturing your API server nodes plus an Ingress object that references that Service and maps the desired host and URL path to your API servers. This would give you more flexibility, as the Service should adjust to changes in your API servers automatically, which can be interesting when things like rolling upgrades come into play. One complication, though, is that Traefik needs to talk to the API server to process Ingresses and Services (and Endpoints, for that matter), which it cannot do while the API server is unavailable. You would either need some kind of HA setup for that access or be willing to tolerate a certain amount of unavailability. (FWIW, Traefik should recover from temporary API server downtime on its own.)
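A rough, untested sketch of what those objects could look like is below. The names, namespace, IP addresses, and port are placeholders; it uses a selector-less Service backed by a manually maintained Endpoints object, and the HTTPS caveat from the file provider example applies here just the same.

apiVersion: v1
kind: Service
metadata:
  name: apiserver-lb          # placeholder name
  namespace: default
spec:
  ports:                      # no selector: endpoints are maintained manually below
  - name: https
    port: 6443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: apiserver-lb          # must match the Service name
  namespace: default
subsets:
- addresses:                  # placeholder IPs of the three masters
  - ip: 10.0.0.1
  - ip: 10.0.0.2
  - ip: 10.0.0.3
  ports:
  - name: https
    port: 6443
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apiserver
  namespace: default
spec:
  rules:
  - host: apiserver           # same host rule as in the file provider example
    http:
      paths:
      - backend:
          serviceName: apiserver-lb
          servicePort: 6443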

Whether you want to run Traefik in-cluster or out-of-cluster is up to you. The former is definitely easier to set up if you want to process API objects, as you won't have to pass in API server configuration parameters; however, the same restrictions regarding API server connectivity apply if you go down the Ingress/Service route. With the file provider approach, you don't need to worry about that -- it is perfectly possible to run Traefik inside Kubernetes without using the Kubernetes provider.
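For completeness, if you do run Traefik out-of-cluster and still want the Kubernetes provider, the relevant bit of traefik.toml would look roughly like this (untested; the endpoint, token, and CA path are placeholders for your setup). Of course, this brings back the chicken-and-egg concern from above if that endpoint is itself the thing you are load balancing.

[kubernetes]
  # Address of an API server reachable from wherever Traefik runs (placeholder)
  endpoint = "https://apiserver1:6443"
  # Placeholder credentials: a bearer token plus the cluster CA used to verify the API server
  token = "REPLACE_WITH_BEARER_TOKEN"
  certAuthFilePath = "/path/to/ca.crt"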

-- Timo Reimann
Source: StackOverflow