We have an nginx ingress in our Kubernetes cluster, and it forwards traffic to a ClusterIP service.
We assumed that traffic would go directly from the node(s) running the ingress controller to the node(s) running the target service's pods. We are seeing behavior that contradicts this, though (I'm omitting the details because they are complex and would be a distraction).
Our assumption was that the aws-node/kube-proxy daemons set up iptables rules on each node so that traffic to a ClusterIP service goes directly to the correct node(s). Is this true or false, and why? Is it possible that traffic to a ClusterIP service gets routed through other nodes in the cluster for some reason along the way?
After spending a lot of time on this, I found some good resources:
The info in the first link helped us follow the iptables chains from the nginx ingress controllers through to the target pods of the destination ClusterIP service.
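For reference, the trace looks roughly like this on the node hosting an ingress pod. This is a sketch: the ClusterIP `10.100.0.10` and the `XXXXXXXXXXXXXXXX` chain-name hashes are placeholders you must substitute from your own cluster.

```shell
# Run on the node hosting an nginx ingress pod.
# 10.100.0.10 is a placeholder ClusterIP; substitute your service's address.

# 1. Find the per-service chain matching the ClusterIP in the nat table.
#    kube-proxy names these chains KUBE-SVC-<hash>.
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.100.0.10

# 2. List that chain; each service endpoint appears as a KUBE-SEP-<hash>
#    entry, selected with a statistic/random-probability match.
sudo iptables -t nat -L KUBE-SVC-XXXXXXXXXXXXXXXX -n

# 3. Each KUBE-SEP chain DNATs directly to a pod IP:port. With the AWS
#    VPC CNI, pod IPs are VPC-routable, so the packet leaves this node
#    addressed straight to the destination pod.
sudo iptables -t nat -L KUBE-SEP-XXXXXXXXXXXXXXXX -n
```

The key observation is that the DNAT to a concrete pod IP happens on the originating node itself, so there is no intermediate hop through another node for ClusterIP traffic in iptables mode.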
In this case, I can confirm that traffic flows directly between the two and does not route through any other nodes.
We are still having our issues, but apparently they are not caused by traffic being routed through extra nodes.