One Traefik Pod in Kubernetes fails with error: 'command traefik error: field not found, node: redirect'

9/18/2019

I'm running Traefik on a Kubernetes cluster to manage Ingress, and it has been running fine for a long time. I recently implemented cluster autoscaling, which works fine except that on one Node (newly created by the autoscaler) Traefik won't start. It sits in CrashLoopBackOff, and the Pod's logs show: [date] [time] command traefik error: field not found, node: redirect. Google found no relevant results, and the error itself is not very descriptive, so I'm not sure where to look. My best guess is that it has something to do with the RedirectRegex Middleware configured in Traefik's config file:

    [entryPoints.http.redirect]
    regex = "^http://(.+)(:80)?/(.*)"
    replacement = "https://$1/$3"

Traefik actually still works: I can still access all of my apps from their URLs in my browser, even those on the Node with the dead Traefik Pod. The Traefik Pods on the other Nodes still run happily, and the Nodes are (at least in theory) identical.

-- Conagh
kubernetes
traefik

2 Answers

9/19/2019

The devs have a Migration Guide that looks like it may help.

"redirect" is gone but now there is "RedirectScheme" and "RedirectRegex" as a new concept of "Middlewares".

It looks like they are moving to a pipeline approach: you define a chain of "middlewares" that a "router" applies to traffic coming in on an "entrypoint", deciding how to direct it and what to add/remove/modify along the way. "Backends" are now "providers", and they have a clearer, modular concept of configuration. It looks like it will offer better organization than earlier versions.
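
For example, a rough v2 equivalent of the redirect from the question might look like this in the file provider's dynamic configuration (a sketch, untested; the middleware, router, and service names are placeholders). RedirectScheme is the simpler fit for a plain HTTP-to-HTTPS redirect; RedirectRegex takes regex/replacement keys much like the old option did:

    # Traefik v2 dynamic configuration (file provider).
    # "https-redirect", "my-router", and "my-service" are placeholder names.
    [http.middlewares]
      [http.middlewares.https-redirect.redirectScheme]
        scheme = "https"
        permanent = true

    [http.routers]
      [http.routers.my-router]
        rule = "Host(`example.com`)"      # placeholder rule
        entryPoints = ["web"]
        middlewares = ["https-redirect"]  # the chain applied by this router
        service = "my-service"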

-- mainmachine
Source: StackOverflow

9/18/2019

After further googling, I found the answer on Reddit. It turns out Traefik released v2.0 a few days ago, which is not backwards compatible. Only this Pod had the issue because it was the only one for which a new (v2.0) image was pulled (it being on the only recently created Node). I reverted to v1.7 until I have time to migrate properly: I had to update the DaemonSet to use v1.7, then kill the Pod so it was recreated from the older image.
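
In case it helps anyone else, the fix amounts to pinning the image tag in the DaemonSet instead of relying on a floating tag. Roughly (the name, namespace, and labels here are placeholders, and I'm assuming the original spec used something like traefik:latest):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: traefik            # placeholder name
      namespace: kube-system   # placeholder namespace
    spec:
      selector:
        matchLabels:
          app: traefik
      template:
        metadata:
          labels:
            app: traefik
        spec:
          containers:
          - name: traefik
            # Pin a v1 tag so newly created Nodes don't pull v2.0:
            image: traefik:1.7

After applying the change, deleting the crashing Pod lets the DaemonSet controller recreate it from the pinned image.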

-- Conagh
Source: StackOverflow