pod routes don't match IP

2/3/2017

I'm using Kubernetes 1.5.2 on CoreOS 1235.6.0 on bare metal, with Calico v1.0.2 for the overlay network. Containers are getting correct IP addresses, but the routes inside them don't match those addresses:

/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 82:df:73:ee:d1:15 brd ff:ff:ff:ff:ff:ff
    inet 10.2.154.97/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::80df:73ff:feee:d115/64 scope link
       valid_lft forever preferred_lft forever
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         169.254.1.1     0.0.0.0         UG    0      0        0 eth0
169.254.1.1     0.0.0.0         255.255.255.255 UH    0      0        0 eth0

As a result, pod networking is broken. Outgoing traffic times out, whether it's ICMP or TCP, and whether it's to the host, another pod on the same host, the apiserver, or the public Internet. The only traffic that works is this pod talking to itself.
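For example, everything below hangs and eventually times out from inside the pod (the target addresses are placeholders: my node's IP, a neighbouring pod's IP, the apiserver's cluster IP, and a public host; substitute your own):

/ # ping -c 3 192.168.1.10                # the node this pod runs on
/ # ping -c 3 10.2.154.98                 # another pod on the same node
/ # wget -T 5 -qO- http://10.3.0.1:443    # TCP connect towards the apiserver cluster IP
/ # ping -c 3 8.8.8.8                     # the public Internet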

Here's how I'm running kubelet:

[Unit]
After=network-online.target
Wants=network-online.target
[Service]
Environment=KUBELET_VERSION=v1.5.2_coreos.0
Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
  --volume var-log,kind=host,source=/var/log \
  --mount volume=var-log,target=/var/log \
  --dns=host \
  --volume cni-conf,kind=host,source=/etc/cni \
  --mount volume=cni-conf,target=/etc/cni \
  --volume cni-bin,kind=host,source=/opt/cni/bin \
  --mount volume=cni-bin,target=/opt/cni/bin"
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=/usr/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --allow-privileged=true \
  --api-servers=https://master.example.com \
  --cluster_dns=10.3.0.10 \
  --cluster_domain=cluster.local \
  --container-runtime=docker \
  --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
  --network-plugin=cni \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --tls-cert-file=/etc/kubernetes/ssl/worker.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

The Calico config is the standard one.
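For reference, the CNI config in /etc/cni/net.d has roughly this shape (a sketch from memory; the network name, etcd endpoint, and kubeconfig path here are placeholders rather than my exact values):

$ cat /etc/cni/net.d/10-calico.conf
{
    "name": "calico",
    "type": "calico",
    "etcd_endpoints": "https://127.0.0.1:2379",
    "log_level": "info",
    "ipam": {
        "type": "calico-ipam"
    },
    "policy": {
        "type": "k8s"
    },
    "kubernetes": {
        "kubeconfig": "/etc/kubernetes/worker-kubeconfig.yaml"
    }
}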

What have I misconfigured?

-- Chris Jones
coreos
docker
kubernetes
project-calico

1 Answer

2/9/2017

The addressing and routes inside the container look fine; the routes outside the container, on the host, would be more interesting. Given what you've seen (a veth was created, which implies the CNI plugin is working), I'd check that the policy controller and calico-node are running properly, with no error logs or restart loops.
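For example, on the affected node (the pod names and namespace below assume the usual self-hosted Calico manifest; adjust them to your install):

$ ip route show                                    # expect a /32 route to each local pod via its cali* interface
$ kubectl -n kube-system get pods -o wide | grep calico
$ kubectl -n kube-system logs calico-node-<xxxxx>  # the calico-node DaemonSet pod on this node
$ journalctl -u kubelet | grep -iE 'cni|calico'    # CNI plugin errors surface in the kubelet log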

You might also want to try to get live support from the community: register at https://slack.projectcalico.org

-- Matt Dupre
Source: StackOverflow