We're trying to put Azure Traffic Manager in front of our Azure Kubernetes Service (AKS) clusters so we can run a cluster in two regions (UK West and UK South) and balance traffic across both.
The Traffic Manager itself seems to be working OK, but in the Azure portal it's showing as degraded, and in the ingress controller logs on the Kubernetes cluster I can see a request that looks like this:
[18/Sep/2019:10:40:58 +0000] "GET / HTTP/1.1" 404 153 "-" "Azure Traffic Manager Endpoint Monitor" 407 0.000 [-]
So the Traffic Manager is firing off a probe request; it's hitting the ingress controller, but it obviously can't resolve that path, so it returns a 404.
I've played about with the Custom host header setting to point the probes at a health check endpoint in one of the pods. It did work for a bit, but then it seemed to go back to doing a GET on /, so the endpoint went degraded again (yeah, I know, sounds odd).
Even if that worked, I don't really want to point it at a specific pod endpoint in case that pod is genuinely down for some reason. Is there something we can do in the ingress controller config to make it respond with a 200 so the Traffic Manager knows it's up?
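To give an idea of the sort of thing I mean, I was imagining something like a server-snippet on the ingress (assuming we're on the NGINX ingress controller; the probe path, host and service names here are just made up):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tm-probe
  annotations:
    kubernetes.io/ingress.class: nginx
    # Answer the Traffic Manager probe path directly from the ingress controller
    nginx.ingress.kubernetes.io/server-snippet: |
      location = /tm-probe {
        return 200 'ok';
      }
spec:
  rules:
    - host: myapp.example.com      # placeholder; would go in the Custom host header setting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-svc    # placeholder app service
                port:
                  number: 80

Not sure if that's a sensible approach though, hence the question.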
Cheers
As a quick fix, I would suggest switching to TCP-based probing. You can change the monitoring protocol to TCP and choose the port your AKS ingress is listening on.
If the three-way handshake on that port fails, the probe is considered failed.
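Roughly something like this with the Azure CLI should do it (resource group, profile name and port are placeholders for your setup):

az network traffic-manager profile update \
  --resource-group my-rg \
  --name my-tm-profile \
  --monitor-protocol TCP \
  --monitor-port 443

With TCP monitoring there is no path to 404 on, so the response from the ingress controller's default backend stops mattering.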
Longer term, why not expose a simple health check endpoint on the same pod where the app is hosted rather than on a different pod? If you deploy a workaround that returns HTTP 200 from the ingress controller and the backend is actually down, traffic will still be routed to it, which defeats the point of having a probe.
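As a rough sketch (the host, service name and health path are placeholders), you could route a health path through the ingress to the app's own service, then point the Traffic Manager monitor path and Custom host header at it:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-health
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: myapp.example.com      # use this as the Custom host header in Traffic Manager
      http:
        paths:
          - path: /healthz         # hypothetical health endpoint served by the app pods
            pathType: Prefix
            backend:
              service:
                name: myapp-svc    # service backed by the app pods, not a single pod
                port:
                  number: 80

That way the probe only goes healthy when the whole path (Traffic Manager -> ingress -> service -> pod) is actually serving, and you're not tied to one specific pod.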