I have a cluster with container range 10.101.64.0/19 on a network A and a subnet SA with range 10.101.0.0/18. On the same subnet there is a VM in GCE with IP 10.101.0.4, and it can be pinged just fine from within the cluster, e.g. from a node with IP 10.101.0.3.

However, if I go to a pod on this node which got the address 10.101.67.191 (which is expected; this node assigns addresses from 10.101.67.0/24 or thereabouts), I don't get any meaningful answer from that VM I want to access from this pod. Using tcpdump on ICMP, I can see that when I ping that VM from the pod, the ping gets there, but no echo reply ever comes back to the pod. It seems like the VM is just throwing it away.
Any idea how to resolve it? Some routes or firewalls? I am using the same topology in the default subnet created by Kubernetes, where this works, but I cannot find anything relevant that could explain the difference (there are some routes and firewall rules that could influence it, but I wasn't successful when trying to mimic them in my subnet).
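In case it helps, this is how I've been comparing the two setups (a sketch; the exact --filter syntax may need tweaking):

    # List the routes and firewall rules attached to the default network,
    # to compare against the ones on network A.
    gcloud compute routes list --filter="network:default"
    gcloud compute firewall-rules list --filter="network:default"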
I think it is a firewall issue. I've already provided the solution here on Stack Overflow; it may help to solve your case as well.
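In short: GCP firewalls deny ingress by default, and traffic between VMs in the subnet only works because an allow rule covers the subnet range. Your pod range (10.101.64.0/19) lies outside the subnet's 10.101.0.0/18, so packets sourced from pod IPs match no allow rule and the VM's replies are never generated. A minimal sketch of the rule I'd add, assuming your network is really named A (the rule name and protocol list are placeholders, adjust to taste):

    # Allow traffic sourced from the pod CIDR into instances on network A.
    gcloud compute firewall-rules create allow-from-pods \
        --network=A \
        --source-ranges=10.101.64.0/19 \
        --allow=tcp,udp,icmp

This mirrors what the Kubernetes-created default network does for you automatically: it provisions a firewall rule whose source range includes the container CIDR, which is why the same topology works there.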