Connect from Kubernetes Pod to VM

11/27/2018

We are currently running a Kubernetes cluster on GCP. The cluster has several pods in the default network, 10.154.0.0/16. We have now created a new VM in the same network and assigned it the static internal IP 10.154.0.4.

We are now trying to connect from a pod to the freshly created VM, but we can only ping it. We installed a basic webserver on the VM that should be reachable only from the internal network, but no connection from the pods succeeds.

Isn't it possible to access all ports on the internal network without creating any additional firewall rules?
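
For reference, the pod IPs can be listed to check which range they actually come from, since that determines which source addresses the VM sees:

    kubectl get pods -o wide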

Logs:

Pinging the VM from a pod (works):

    root@censored-6d9f888f75-pncs4:/var/www# ping 10.154.0.4
    PING 10.154.0.4 (10.154.0.4): 56 data bytes
    64 bytes from 10.154.0.4: icmp_seq=0 ttl=63 time=1.636 ms

Accessing the webserver on the VM (not working):

    root@censored-6d9f888f75-pncs4:/var/www# curl 10.154.0.4
    ^C
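
That curl simply hangs until interrupted (^C), which points at silently dropped packets rather than a refused connection. A bounded retry makes the distinction explicit (the 5-second timeout is an arbitrary choice):

    # Times out if packets are dropped by a firewall;
    # prints "Connection refused" if the server itself rejects them
    curl -v --connect-timeout 5 10.154.0.4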

-- hresult
Tags: kubernetes

1 Answer

11/27/2018

Not sure if this is what's happening to you, but if you ssh into a node and run sudo iptables-save, there is this interesting rule...

    -A POSTROUTING ! -d 10.0.0.0/8 -m comment --comment "kubenet: SNAT for outbound traffic from cluster" -m addrtype ! --dst-type LOCAL -j MASQUERADE
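
If the full dump is hard to read, narrowing it to the NAT table makes the rule easier to spot; a small sketch, assuming the node image ships iptables-save:

    # Only the NAT table, filtered to the masquerade rules
    sudo iptables-save -t nat | grep MASQUERADE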

That rule masquerades outbound traffic from the cluster unless the destination is inside 10.0.0.0/8; in other words, traffic to addresses within the 10.0.0.0/8 range keeps the pod's own source IP. If your pods are running in a 172.x or 192.x range, that is the address your requests to 10.154.0.4 arrive with, and such traffic can be dropped if the firewall rules and routes have not been properly configured. That would also explain why ping works while curl does not: in the default network, the default-allow-icmp rule accepts ICMP from any source, whereas default-allow-internal only covers sources in 10.128.0.0/9.
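
One way to confirm this is to watch the traffic arrive on the VM and check its source address; a minimal sketch, assuming the VM's primary interface is eth0 and the webserver listens on port 80:

    # On the VM: show incoming HTTP packets without name resolution
    sudo tcpdump -ni eth0 tcp port 80

If the requests show up with a source IP outside the ranges your firewall rules allow, a rule admitting the pod CIDR should fix it. A hypothetical example, with 172.16.0.0/14 standing in for the cluster's actual pod range:

    gcloud compute firewall-rules create allow-pods-to-vms \
        --network=default \
        --allow=tcp,udp,icmp \
        --source-ranges=172.16.0.0/14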

-- suren
Source: StackOverflow