Weave Net network connection from pod to service (Kubernetes 1.9 with kubeadm)

4/9/2018

I have a local three-node Kubernetes cluster with Weave Net installed as the overlay network plugin. My goal is to try out Kubernetes network policies and to export log messages from this process to an ELK stack.

Unfortunately, I cannot proceed because I cannot solve my issues with kube-dns. Name resolution seems to work, but the network connection from a pod to a service is problematic.

Here are some facts about my setup (see below for versions and general config details):

  • I am logged in to a busybox pod
  • I have a service called "nginx", backed by an nginx pod that is up and running
  • From the busybox pod, I cannot ping the DNS service: 25 packets transmitted, 0 packets received, 100% packet loss
  • If I run "nslookup nginx", I get:

    Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
    
    Name:      nginx
    Address 1: 10.104.126.131 nginx.default.svc.cluster.local
  • I also changed the config file on the busybox pod manually, so that name resolution works without the FQDN:

    / # cat /etc/resolv.conf
    nameserver 10.96.0.10
    search default.svc.cluster.local svc.cluster.local cluster.local nginx XXX.XXX.de
    options ndots:5

    This doesn't seem like a good workaround to me, but at least it works, and nslookup gives me the correct IP of the nginx service:

    user@controller:~$ kubectl get svc
    NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    
    nginx                    ClusterIP   10.104.126.131   <none>        80/TCP         3d
    
  • Now, back to my networking issue: the pod doesn't seem to have the correct network interface for the connection to the service to be established:

    / # ifconfig
    eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
              inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
              inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:272 errors:0 dropped:0 overruns:0 frame:0
              TX packets:350 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:23830 (23.2 KiB)  TX bytes:32140 (31.3 KiB)
    
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

The busybox pod has the IP 172.17.0.2, while the DNS service is in a subnet starting with 10 (the DNS IP is 10.96.0.10); see the diagnostic sketch after this list.

  • Weave Net sometimes crashes on one worker node, but in general it is shown as "Running", and I don't think that this can be the reason.
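
For reference, a minimal diagnostic sketch to check which network the pods are actually attached to (the label selector assumes the standard Weave Net DaemonSet manifest; pod names are the ones from this question):

    # Which IP did each pod actually get, and on which node does it run?
    kubectl get pods -o wide

    # Weave-attached pods normally get addresses from Weave's default
    # allocation range (10.32.0.0/12), not from the Docker bridge
    # (172.17.0.0/16). The Weave bridge should be present on every node:
    ip addr show weave

    # State of the Weave Net DaemonSet pods:
    kubectl get pods -n kube-system -l name=weave-net -o wide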

--> Can anybody see the underlying configuration mistake in my networking? I'd be glad for hints! :)

General information:

Kubernetes/kubectl: v1.9.2

I used kubeadm to install.

uname -a: Linux controller 4.4.0-87-generic #110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

docker version:

    Client:
     Version:      1.13.1
     API version:  1.26
     Go version:   go1.6.2
     Git commit:   092cba3
     Built:        Thu Nov  2 20:40:23 2017
     OS/Arch:      linux/amd64

    Server:
     Version:      1.13.1
     API version:  1.26 (minimum version 1.12)
     Go version:   go1.6.2
     Git commit:   092cba3
     Built:        Thu Nov  2 20:40:23 2017
     OS/Arch:      linux/amd64
     Experimental: false

Weave Net: 2.2.0

-- Verena I.
kube-dns
kubernetes
nginx
weave

1 Answer

4/9/2018

None of the Service IPs (a.k.a. cluster IPs, a.k.a. portal IPs) respond to ping, so that's not a good test! A good test is to try the Service IP with an appropriate client, such as nslookup for DNS or curl for HTTP (and make sure you do that on the correct port, too).
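
For example, from the busybox pod in the question (BusyBox ships a minimal wget; the Service name and ClusterIP are the ones shown above):

    # HTTP test against the nginx Service, by name and by ClusterIP:
    wget -qO- http://nginx
    wget -qO- http://10.104.126.131:80

    # or from the controller, without entering the pod:
    kubectl exec busybox -- wget -qO- http://nginx.default.svc.cluster.local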

As you've seen from nslookup, the kube-dns Service is functioning properly.

There is a very good document about how these virtual IPs work in Kubernetes. Long story short: you will not find a network interface for this network; it is implemented via redirections in the kernel, configured by iptables.
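
If you want to see those redirections concretely, you can inspect the NAT rules that kube-proxy maintains on each node. A rough sketch (the ClusterIP is the nginx Service IP from the question; KUBE-SERVICES is the chain kube-proxy uses in its default iptables mode):

    # The Service VIP shows up in the nat table, not as an interface:
    sudo iptables-save -t nat | grep 10.104.126.131

    # KUBE-SERVICES is the entry point for these rules:
    sudo iptables -t nat -L KUBE-SERVICES -n | grep nginx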

-- Janos Lenart
Source: StackOverflow