Kubernetes pod can't reach external ip address

4/27/2019

I am setting up a k8s test cluster environment, but the pods deployed by k8s can't reach external IP addresses.

The pod IP address is 173.16.2.5/24. The node has IP 10.168.99.198/24 on interface eth0 and 173.16.2.1/24 on the cni network.

  1. Pinging 10.168.99.197 from the node works:
#ping 10.168.99.197
PING 10.168.99.197 (10.168.99.197) 56(84) bytes of data.
64 bytes from 10.168.99.197: icmp_seq=1 ttl=64 time=0.120 ms
  2. But pinging the same IP from a busybox pod fails:
#ping 10.168.99.197
PING 10.168.99.197 (10.168.99.197): 56 data bytes
<-- no response

Route on busybox container created by k8s:

# ip route
default via 173.16.2.1 dev eth0
10.244.0.0/16 via 173.16.2.1 dev eth0
173.16.2.0/24 dev eth0 scope link  src 173.16.2.5
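From the route table above, traffic from the pod to 10.168.99.197 matches no specific route and so falls through to the default via 173.16.2.1 (the node's cni0 address). A small sketch of that longest-prefix-match decision, using Python's `ipaddress` with the routes copied from the output above:

```python
import ipaddress

# Routes from the pod's "ip route" output above.
# None stands in for the on-link route, which has no gateway.
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "173.16.2.1"),      # default
    (ipaddress.ip_network("10.244.0.0/16"), "173.16.2.1"),
    (ipaddress.ip_network("173.16.2.0/24"), None),          # on-link
]

def next_hop(dst):
    """Longest-prefix match: pick the most specific route containing dst."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net, _ in routes if addr in net),
               key=lambda net: net.prefixlen)
    return dict(routes)[best]

print(next_hop("10.168.99.197"))  # -> 173.16.2.1 (goes via the default route)
print(next_hop("173.16.2.9"))    # -> None (same subnet, delivered on-link)
```

So the pod's routing itself looks sane; the packet does reach cni0 on the node, which points at the node-side flannel/cni configuration as the place to look next.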

If I start a busybox container that is not created by k8s, the network is fine. Route on the busybox container created by docker:

# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 scope link  src 172.17.0.2
# ping 10.168.99.197
PING 10.168.99.197 (10.168.99.197): 56 data bytes
64 bytes from 10.168.99.197: seq=0 ttl=63 time=0.554 ms

Route table on the node:

# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         localhost       0.0.0.0         UG    0      0        0 eth0
10.168.99.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
link-local      0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
173.16.0.0      173-16-0-0.clie 255.255.255.0   UG    0      0        0 flannel.1
173.16.1.0      173-16-1-0.clie 255.255.255.0   UG    0      0        0 flannel.1
173.16.2.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0

How can I resolve this problem so that the pods created by k8s can reach external IPs?

-- Allen
kubernetes

1 Answer

5/7/2019

The pods were unable to reach external IPs because the flannel network configuration did not match the cni network. Changing the flannel setting resolved the problem:

# kubectl get configmap -n kube-system -o yaml kube-flannel-cfg
...
  net-conf.json: |
    {
      "Network": "172.30.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
...
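The key condition is that the subnet cni0 hands out to pods must fall inside the `Network` range in flannel's `net-conf.json`; flannel only programs routes for subnets within that range. A minimal containment check, using the `Network` value shown above and the pod subnet from the question (substitute your own cluster's values):

```python
import ipaddress
import json

# net-conf.json as stored in the kube-flannel-cfg ConfigMap (value taken
# from the output above; read yours with kubectl on a real cluster).
net_conf = json.loads(
    '{"Network": "172.30.0.0/16", "Backend": {"Type": "vxlan"}}'
)
flannel_net = ipaddress.ip_network(net_conf["Network"])

# Pod subnet served by cni0, from the question (node holds 173.16.2.1/24).
pod_subnet = ipaddress.ip_network("173.16.2.0/24")

# False here means the pod subnet lies outside flannel's Network --
# the mismatch described in this answer.
print(pod_subnet.subnet_of(flannel_net))
```

When the two disagree, either change flannel's `Network` to cover the cni subnet or reconfigure the cni bridge, then restart the flannel pods so the new config takes effect.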
-- Allen
Source: StackOverflow