How does Linux forward packets to a k8s service?

2/25/2020

I have installed k8s on a VM. Now I have access to some k8s services from this VM, for example:

[root@vm ~]# netcat -vz 10.96.0.10 9153
kube-dns.kube-system.svc.cluster.local [10.96.0.10] 9153 open

10.96.0.10 is the ClusterIP of the kube-dns service.
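
For reference, the ClusterIP can be cross-checked with kubectl (assuming the standard kube-dns service in the kube-system namespace):

[root@vm ~]# kubectl get svc -n kube-system kube-dns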

My question is: how does Linux forward requests sent to 10.96.0.10 to the right destination?

I don't see any interface with the IP 10.96.0.10, nor any routing rule for 10.96.0.10, on the VM:

[root@vm ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:00:00:10 brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.104/24 brd 192.168.4.255 scope global dynamic noprefixroute ens3
       valid_lft 33899sec preferred_lft 28499sec
    inet6 fe80::ca7d:cdfe:42a3:75f/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:cd:1d:8a:77 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.0.219.64/32 brd 10.0.219.64 scope global tunl0
       valid_lft forever preferred_lft forever
7: calib9d0c90540c@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
8: cali81206f5bf92@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
[root@vm ~]# ip route
default via 192.168.4.1 dev ens3 proto dhcp src 192.168.4.104 metric 202
10.0.189.64/26 via 192.168.4.107 dev tunl0 proto bird onlink
blackhole 10.0.219.64/26 proto bird
10.0.219.107 dev calib9d0c90540c scope link
10.0.219.108 dev cali81206f5bf92 scope link
10.0.235.128/26 via 192.168.4.105 dev tunl0 proto bird onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.4.0/24 dev ens3 proto dhcp scope link src 192.168.4.104 metric 202
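
A plain route lookup confirms this: ip route get does a pure routing-table (FIB) lookup, without passing through netfilter, so for this address it can only match the default route:

[root@vm ~]# ip route get 10.96.0.10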
-- Kirill Bugaev
kubernetes
linux
networking

1 Answer

2/25/2020

kube-proxy manages iptables NAT rules that route the traffic to the actual endpoints of a service. The service IP is purely virtual, which is why it appears on no interface and in no route: the destination address is rewritten by netfilter (in the nat table's OUTPUT chain for locally generated traffic, PREROUTING for forwarded traffic) before the kernel makes its routing decision. One endpoint is picked per connection, which spreads traffic across all endpoints of the service (random selection in the default iptables mode, round-robin and other schedulers in IPVS mode).
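
You can inspect these rules yourself (a sketch assuming kube-proxy runs in its default iptables mode; the KUBE-SVC-.../KUBE-SEP-... chain names are generated hashes, so yours will differ):

[root@vm ~]# iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.10
[root@vm ~]# iptables-save -t nat | grep 10.96.0.10

The first command lists the dispatch rules for this ClusterIP; the second dumps the same rules in iptables-save format. Following one of the printed KUBE-SVC-... chains (iptables -t nat -L KUBE-SVC-<hash> -n, with the hash taken from the output above) leads to per-endpoint KUBE-SEP-... chains, each ending in a DNAT rule that rewrites the destination to a real pod IP and port.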

-- Thomas
Source: StackOverflow