Connection from a pod to a public IP backed by a LoadBalancer that goes back to k8s

8/7/2019

I have deployed the nginx ingress controller, which creates a Service of type LoadBalancer with a public IP. externalTrafficPolicy is set to Local to preserve the client IP.
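For reference, a minimal sketch of what that Service looks like (the name nginx-ingress-controller and the namespace my-ns are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-ingress-controller
      namespace: my-ns
    spec:
      type: LoadBalancer
      # Only route external traffic to nodes that run a backing pod,
      # which preserves the client source IP
      externalTrafficPolicy: Local
      selector:
        app: nginx-ingress-controller
      ports:
      - name: http
        port: 80
        targetPort: 80
      - name: https
        port: 443
        targetPort: 443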

The Azure load balancer is correctly configured with all the nodes, and the health check is there to "disable" the nodes without an LB pod.

In the Internet => pod direction, it works well. But when a pod tries to make a request using the domain associated with the public IP of the LB, the request fails whenever that pod does not run on the same node as one of the LB pods.

On a node without an LB pod, the ipvsadm -Ln command returns:

    TCP  PUBLICIP:80 rr
    TCP  PUBLICIP:443 rr

On the node that runs the LB pod:

    TCP  PUBLICIP:80 rr
      -> 10.233.71.125:80             Masq    1      4          0
    TCP  PUBLICIP:443 rr
      -> 10.233.71.125:443            Masq    1      0          0

The IPVS configuration seems legitimate according to the documentation: with externalTrafficPolicy: Local, kube-proxy only adds real servers for endpoints running on the local node, so nodes without an LB pod end up with an empty virtual service.

Is it an issue or a limitation?

If it is a limitation, how can it be worked around? E.g.:

  • Deploy the LB as a DaemonSet (see the sketch after this list), with the downside of running as many LB pods as there are nodes
  • Do not use the public domain but a Kubernetes FQDN instead (not easy to implement)
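For the DaemonSet option, a rough sketch (the image tag is illustrative; use whatever controller version fits your cluster):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: nginx-ingress-controller
      namespace: my-ns
    spec:
      selector:
        matchLabels:
          app: nginx-ingress-controller
      template:
        metadata:
          labels:
            app: nginx-ingress-controller
        spec:
          containers:
          # One controller pod per node, so every node has a local
          # endpoint and the IPVS entry is never empty
          - name: nginx-ingress-controller
            image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
            ports:
            - containerPort: 80
            - containerPort: 443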

Are there other solutions?

thank you!

Versions/Additional details:

  • k8s: 1.14.4
  • Cloud provider: Azure (not AKS)
-- Nicolas Labrot
azure
kubernetes

1 Answer

8/8/2019

I ended up implementing what is suggested in this issue and this article.

I added the following snippet to the CoreDNS ConfigMap:

    rewrite stop {
      name regex example\.com nginx-ingress-controller.my-ns.svc.cluster.local
      answer name nginx-ingress-controller.my-ns.svc.cluster.local example.com
    }

This uses the rewrite plugin. It works well; the only downside is that it relies on a static definition of the ingress controller FQDN.
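For context, here is roughly how the snippet sits inside the coredns ConfigMap (a sketch; the rest of the Corefile is the stock configuration shipped with the cluster and may differ from yours):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            health
            # Rewrite queries for the public domain to the in-cluster
            # service name, then rewrite the answer back so the client
            # sees the name it asked for
            rewrite stop {
              name regex example\.com nginx-ingress-controller.my-ns.svc.cluster.local
              answer name nginx-ingress-controller.my-ns.svc.cluster.local example.com
            }
            kubernetes cluster.local in-addr.arpa ip6.arpa {
              pods insecure
              fallthrough in-addr.arpa ip6.arpa
            }
            forward . /etc/resolv.conf
            cache 30
            loop
            reload
            loadbalance
        }

After CoreDNS reloads the configuration, pods resolving example.com get the ClusterIP of the ingress controller Service instead of the public IP, so the traffic never leaves the cluster and the IPVS limitation above is sidestepped.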

-- Nicolas Labrot
Source: StackOverflow