Kubernetes installed on premise, nginx-ingress, and a service with multiple pods spread across multiple nodes. All of these nodes also act as nginx ingress nodes.
The problem is that when a request comes in through the load balancer, the ingress can forward it to a pod on a different worker node, which causes unnecessary traffic inside the workers' network. I want to force the ingress, when a request comes from outside, to always pick a pod on the same node; only if there is no local pod should it forward to other nodes.
This image more or less represents my case: the blue path is the problem I have, the red path is what I expect.
I saw that "externalTrafficPolicy: Local" exists, but it only works for the NodePort/LoadBalancer service types; nginx ingress tries to connect using the "clusterIP", so it skips this functionality. Is there a way to get this feature working for ClusterIP, or something similar? I started reading about Istio and Linkerd; they seem powerful, but I don't see any parameter to configure this workflow.
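For reference, this is the kind of Service the field applies to (just a minimal sketch; the names web-svc and app: web are placeholders for my real service):

```yaml
# Sketch only: externalTrafficPolicy is honoured for NodePort/LoadBalancer Services,
# but the ingress controller resolving the ClusterIP/endpoints bypasses it.
apiVersion: v1
kind: Service
metadata:
  name: web-svc                    # placeholder name
spec:
  type: NodePort                   # or LoadBalancer; irrelevant for plain ClusterIP
  externalTrafficPolicy: Local     # keep traffic on the node that received it
  selector:
    app: web                       # placeholder label
  ports:
    - port: 80
      targetPort: 8080
```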
You have to deploy the Ingress Controller with a nodeSelector so it runs on specific nodes, labelled "ingress" or whatever you want. Then you can create an LB on those node IPs, with simple health checking on ports 80 and 443 (just to update the zone in case of node failure) or, even better, a custom health-check endpoint.
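As a rough sketch, assuming you label the dedicated nodes with node-role=ingress (the label key/value are up to you, and the real controller manifest also needs its ServiceAccount, RBAC and args, omitted here), the scheduling-related part looks like this:

```yaml
# Label the nodes that should act as ingress nodes, e.g.:
#   kubectl label node <node-name> node-role=ingress
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller      # name/namespace depend on how you installed it
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      nodeSelector:
        node-role: ingress            # schedule only on the labelled ingress nodes
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # example tag
          ports:
            - containerPort: 80
              hostPort: 80            # expose directly on the node IP for the external LB
            - containerPort: 443
              hostPort: 443
```

For the custom health check, the ingress-nginx controller exposes a /healthz endpoint (on port 10254 by default, if I remember correctly), which is a better target for the LB than a plain TCP check on 80/443.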
As you said, externalTrafficPolicy=Local only works for NodePort/LoadBalancer services: dealing with on-prem clusters is tough :)