I may be running into a situation that is completely normal, but I want to talk it out anyway. In my home lab, I have a single-worker-node, Rancher-managed k3s cluster. I also have an FRR VM acting as the BGP peer for MetalLB inside the cluster, since a UDM Pro can't run BGP natively. I spun up a simple single-pod nginx Deployment with a backing Service of type LoadBalancer. Everything did its job, and the LoadBalancer IP is reachable.
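For context, the MetalLB side of this setup would look roughly like the manifests below. This is a sketch, not my actual config: the resource names, ASNs, and the pool range are assumptions inferred from the IPs in the traceroute further down (FRR at 192.168.100.2, LB IP 192.168.110.1).

```yaml
# Hypothetical MetalLB manifests matching the setup described above.
# Names, ASNs, and the pool range are assumptions, not my real values.
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: frr-peer
  namespace: metallb-system
spec:
  myASN: 64513            # assumed ASN for the MetalLB speaker
  peerASN: 64512          # assumed ASN for the FRR VM
  peerAddress: 192.168.100.2
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lb-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.110.0/24    # assumed range containing the 192.168.110.1 service IP
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: lb-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - lb-pool
```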
The FRR router VM has a single vNIC: no tunnels, subinterfaces, etc. Accessing the nginx service's LoadBalancer IP over HTTP works perfectly, so forwarding clearly works end to end. But from a ping and traceroute perspective, it looks like I have a routing loop.
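On the FRR VM, the BGP session to the MetalLB speaker would be configured roughly like this. Again a sketch under assumed values: the ASNs and the node address (192.168.100.11) are taken from the traceroute below, and the UDM (192.168.0.1) would need a static route for the LB range pointing at the FRR VM since it can't peer itself.

```
! Hypothetical /etc/frr/frr.conf fragment for the FRR VM at 192.168.100.2,
! peering with the MetalLB speaker on the k3s node. ASNs are assumed.
router bgp 64512
 bgp router-id 192.168.100.2
 neighbor 192.168.100.11 remote-as 64513
 !
 address-family ipv4 unicast
  neighbor 192.168.100.11 activate
 exit-address-family
```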
Client traceroute:
PS C:\Users\sbalm> tracert -d 192.168.110.1
Tracing route to 192.168.110.1 over a maximum of 30 hops
1 <1 ms <1 ms <1 ms 192.168.0.1
2 <1 ms <1 ms <1 ms 192.168.100.2
3 1 ms <1 ms <1 ms 192.168.100.11
4 <1 ms <1 ms <1 ms 192.168.0.1
5 <1 ms <1 ms <1 ms 192.168.100.2
6 1 ms <1 ms <1 ms 192.168.100.11
7 <1 ms <1 ms <1 ms 192.168.0.1
8 1 ms <1 ms <1 ms 192.168.100.2
9 1 ms <1 ms <1 ms 192.168.100.11
...
Something doesn't feel "normal" here. Ideas?