I have deployed the nginx ingress controller, which creates a Service of type LoadBalancer with a public IP. externalTrafficPolicy is set to Local to preserve the client source IP.
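For reference, a minimal sketch of such a Service (names, namespace, and labels are assumptions for illustration; adjust them to your deployment):

```yaml
# Sketch of a LoadBalancer Service for the ingress controller.
# Names/labels (nginx-ingress-controller, my-ns) are assumed.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: my-ns
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app: nginx-ingress-controller
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```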
The Azure load balancer is correctly configured with all the nodes, and its health check is there to "disable" the nodes that do not run an ingress controller pod.
In the direction Internet => pod it works well. But when a pod makes a request to the domain associated with the public IP of the load balancer, the request fails whenever that pod does not run on the same node as one of the ingress controller pods.
On such a node, the ipvsadm -Ln command returns virtual services with no real servers:
TCP PUBLICIP:80 rr
TCP PUBLICIP:443 rr
On a node that runs an ingress controller pod:
TCP PUBLICIP:80 rr
-> 10.233.71.125:80 Masq 1 4 0
TCP PUBLICIP:443 rr
-> 10.233.71.125:443 Masq 1 0 0
The IPVS configuration seems legitimate according to the documentation: with externalTrafficPolicy: Local, kube-proxy only adds real servers for endpoints local to the node, so the virtual service is empty everywhere else.
Is this an issue or a limitation? If it is a limitation, how can I work around it? Are there other solutions?

Thank you!
Versions/Additional details:
I ended up implementing what is suggested in this issue and this article.
I added the following snippet to the CoreDNS ConfigMap:
rewrite stop {
    name regex example\.com nginx-ingress-controller.my-ns.svc.cluster.local
    answer name nginx-ingress-controller.my-ns.svc.cluster.local example.com
}
It uses the rewrite plugin. It works well; the only downside is that it relies on a static definition of the ingress controller FQDN.
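For context, here is roughly where the stanza sits inside a full Corefile. The surrounding directives are typical CoreDNS defaults and may differ in your cluster; the rewrite stanza must come before the kubernetes plugin so the query is rewritten before the cluster DNS lookup:

```yaml
# coredns ConfigMap data (sketch; surrounding plugins are assumed defaults)
Corefile: |
  .:53 {
      errors
      health
      rewrite stop {
          name regex example\.com nginx-ingress-controller.my-ns.svc.cluster.local
          answer name nginx-ingress-controller.my-ns.svc.cluster.local example.com
      }
      kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      }
      prometheus :9153
      forward . /etc/resolv.conf
      cache 30
      loop
      reload
      loadbalance
  }
```

With this in place, pods resolving example.com get the ClusterIP of the ingress controller Service instead of the public IP, so the hairpin through the load balancer is avoided entirely.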