Kubernetes Cluster - Using L3 routing - Cannot ping pods across network

3/9/2019

I am setting up a cluster from scratch using 'Learn Kubernetes the Hard Way'. I am noticing that pods on different nodes are not able to communicate with each other. They are not able to reach the internet either.

It seems like the cnio0 bridge interface, which is the pods' default gateway, is not routing the packets properly.

I am not using any network plugins like Calico or Flannel, just basic L3 routing.
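
For completeness, the inter-node pod routes are plain static routes, added roughly like this (assuming worker-1 is 10.10.10.21 with pod CIDR 10.200.1.0/24 and worker-2 is 10.10.10.22 with 10.200.2.0/24, matching the route table further down):

    # on worker-1: reach worker-2's pod subnet via worker-2's node IP
    ip route add 10.200.2.0/24 via 10.10.10.22

    # on worker-2: reach worker-1's pod subnet via worker-1's node IP
    ip route add 10.200.1.0/24 via 10.10.10.21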

Here is my setup:

2 Nodes with external IPs:

  1. Node IPs - 10.10.10.21 (worker-1) and 10.10.10.22 (worker-2)
  2. Pod CIDRs - 10.200.1.0/24 and 10.200.2.0/24 respectively
  3. CNI bridge conf (worker-2) below; a sanity-check sketch for this setup follows the list:

    {
        "cniVersion": "0.3.1",
        "name": "bridge",
        "type": "bridge",
        "bridge": "cnio0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
            "type": "host-local",
            "ranges": [
                [{"subnet": "10.200.2.0/24"}]
            ],
            "routes": [{"dst": "0.0.0.0/0"}]
        }
    }
    
  4. Kubelet Service:

    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/kubernetes/kubernetes
    After=containerd.service
    Requires=containerd.service
    
    [Service]
    ExecStart=/usr/local/bin/kubelet \
      --config=/var/lib/kubelet/kubelet-config.yaml \
      --container-runtime=remote \
      --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \
      --image-pull-progress-deadline=2m \
      --kubeconfig=/var/lib/kubelet/kubeconfig \
      --network-plugin=cni \
      --node-ip="10.10.10.22" \
      --address="10.10.10.22"
      --register-node=true \
      --v=2
    Restart=on-failure
    RestartSec=5
    
    [Install]
    WantedBy=multi-user.target
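
Since "ipMasq": true is set and pod traffic has to be forwarded between cnio0 and eth2, the sanity check mentioned above amounts to something like this on worker-2 (plain sysctl/iptables commands, nothing specific to this cluster):

    # IP forwarding must be enabled for the node to route pod traffic
    sysctl net.ipv4.ip_forward

    # the bridge plugin's ipMasq option should have created CNI-* chains in the nat table
    sudo iptables -t nat -S | grep -i cni

    # make sure nothing in the FORWARD chain drops routed pod traffic
    sudo iptables -S FORWARD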

Routes for each node are as below (taken from worker-2):

    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.0.2.2        0.0.0.0         UG        0 0          0 eth0
    0.0.0.0         192.168.0.1     0.0.0.0         UG        0 0          0 eth1
    10.0.2.0        0.0.0.0         255.255.255.0   U         0 0          0 eth0
    10.10.10.0      0.0.0.0         255.255.255.0   U         0 0          0 eth2
    10.200.1.0      10.10.10.21     255.255.255.0   UG        0 0          0 eth2
    10.200.2.0      0.0.0.0         255.255.255.0   U         0 0          0 cnio0
    192.168.0.0     0.0.0.0         255.255.255.0   U         0 0          0 eth1

From the worker nodes themselves, I am able to reach the pods on the other node. It is only from within a container that routing does not seem to work.
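
To narrow down where the packets get dropped, something like this can be run on worker-2 while pinging a pod on worker-1 from a pod on worker-2 (interface names taken from the route table above):

    # does the ICMP echo leave the pod and reach the bridge?
    sudo tcpdump -ni cnio0 icmp

    # is it then forwarded out the node interface towards 10.10.10.21?
    sudo tcpdump -ni eth2 icmp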

Route table from the pod:

    / # netstat -nr
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.200.2.1      0.0.0.0         UG        0 0          0 eth0
    10.200.2.0      0.0.0.0         255.255.255.0   U         0 0          0 eth0

Please note that the default gateway here is 10.200.2.1 for worker-2; this maps to the cnio0 interface of worker-2:

    worker-2 workerbins]$ ifconfig
    cnio0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 10.200.2.1  netmask 255.255.255.0  broadcast 0.0.0.0
            inet6 fe80::1cc0:7fff:fe7f:1b55  prefixlen 64  scopeid 0x20<link>
            ether 0a:58:0a:c8:02:01  txqueuelen 1000  (Ethernet)
            RX packets 37801  bytes 2660893 (2.5 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 20394  bytes 2502884 (2.3 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
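
For reference, a hop-by-hop test from inside a pod on worker-2 looks roughly like this (the 10.200.1.10 address is just a placeholder for a pod on worker-1):

    / # ping -c1 10.200.2.1     # the cnio0 bridge, the pod's default gateway
    / # ping -c1 10.10.10.22    # worker-2's own node IP
    / # ping -c1 10.10.10.21    # worker-1's node IP, crosses eth2
    / # ping -c1 10.200.1.10    # a pod on worker-1 (placeholder IP)
    / # ping -c1 8.8.8.8        # internet, exercises the ipMasq rule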

Please assist

-- Vijay Nidhi
cni
kubernetes

0 Answers