Kubernetes cluster attempts endless gcr.io DNS lookups, swamping the router. What is wrong and how can I put a stop to it?

6/8/2016

I'm running a Kubernetes 1.2.0 cluster on four Raspberry Pi 2s with Hypriot OS (2015-11-15 stable build). The setup was built for demo purposes. The Pis are networked through a switch, to which a consumer-grade router (IP 192.168.1.1) running DD-WRT is also connected; the router acts as a wireless bridge, DHCP server and DNS server (including local DNS, so the Pis are reachable by hostname). Install scripts and setup YAMLs can be found on GitHub.

The problem is that the Pis are generating an incredible number of DNS lookups on UDP:53, to the point that they threaten to overwhelm the router, which is showing 2600+ active IP connections: ~1600 from the master node and ~300 from the worker nodes. The cluster isn't running any deployments, pods or services at all, and internal DNS (SkyDNS) isn't installed. I have no idea why all these lookups would be necessary, but they're fired off in rapid succession. With only 4 nodes the router can (barely) keep up, but for the demo I'm planning on Friday I'll have to hook up at least 4 more, which will probably overwhelm the router and bring down the cluster.
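A rough way to break that connection count down per node on the router itself (a sketch, assuming the DD-WRT build still exposes the legacy /proc/net/ip_conntrack file; newer kernels expose /proc/net/nf_conntrack instead):

$ # count conntrack entries per originating host (legacy conntrack file assumed)
$ awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^src=/) { print $i; break } }' /proc/net/ip_conntrack | sort | uniq -c | sort -rn | head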

To narrow the problem down, I tried to find out which domain my cluster is so desperate to resolve:

HypriotOS: root@rpi-node-21 in ~
$ tcpdump -vvv -s 0 -l -n port 53
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
10:29:39.724300 IP (tos 0x0, ttl 64, id 4189, offset 0, flags [DF], proto UDP (17), length 52)
    192.168.1.94.58760 > 192.168.1.1.53: [bad udp cksum 0x83e1 -> 0x3e07!] 32499+ A? gcr.io. (24)
10:29:39.724434 IP (tos 0x0, ttl 64, id 4190, offset 0, flags [DF], proto UDP (17), length 52)
    192.168.1.94.58760 > 192.168.1.1.53: [bad udp cksum 0x83e1 -> 0x076d!] 46450+ AAAA? gcr.io. (24)
10:29:39.725011 IP (tos 0x0, ttl 64, id 23734, offset 0, flags [DF], proto UDP (17), length 68)
    192.168.1.1.53 > 192.168.1.94.58760: [udp sum ok] 32499 q: A? gcr.io. 1/0/0 gcr.io. [10s] A 173.194.65.82 (40)
10:29:39.725226 IP (tos 0x0, ttl 64, id 23735, offset 0, flags [DF], proto UDP (17), length 80)
    192.168.1.1.53 > 192.168.1.94.58760: [udp sum ok] 46450 q: AAAA? gcr.io. 1/0/0 gcr.io. [10s] AAAA 2a00:1450:4013:c00::52 (52)
10:29:39.730163 IP (tos 0x0, ttl 64, id 4191, offset 0, flags [DF], proto UDP (17), length 52)
    192.168.1.94.46180 > 192.168.1.1.53: [bad udp cksum 0x83e1 -> 0xef5b!] 65218+ A? gcr.io. (24)

As you can see, the cluster is looking up gcr.io, which resolves just fine to 173.194.65.82, and then immediately looks it up again (note the timestamps: the queries are only milliseconds apart).
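To put a rough number on the query rate from a node, something like this works (a sketch; filtering on dst port 53 captures only the outbound queries, so the router's replies aren't counted):

$ # count gcr.io queries leaving this node over a 10-second window
$ timeout 10 tcpdump -n -l dst port 53 2>/dev/null | grep -c 'gcr\.io'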

Does anybody have a clue as to what might be going on and, more importantly, how to put an end to it, short of shredding the Pis and starting a New Zealand-based dog-walking service? I've included some logs below and can respond quickly to requests for more info. I really hope somebody can help me out; thanks in advance!

Julian

HypriotOS: root@rpi-master in ~
$ docker logs k8s-master
I0608 09:19:08.523757     769 server.go:137] Running kubelet in containerized mode (experimental)
W0608 09:19:39.449996     769 server.go:445] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead.
W0608 09:19:39.450301     769 server.go:406] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults.
I0608 09:19:39.451561     769 plugins.go:71] No cloud provider specified.
I0608 09:19:39.451704     769 server.go:312] Successfully initialized cloud provider: "" from the config file: ""
I0608 09:19:39.452446     769 manager.go:132] cAdvisor running in container: "/docker/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62"
I0608 09:19:41.022249     769 fs.go:109] Filesystem partitions: map[/dev/root:{mountpoint:/rootfs major:179 minor:2 fsType: blockSize:0}]
E0608 09:19:41.038167     769 machine.go:176] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no such file or directory
I0608 09:19:43.098937     769 manager.go:169] Machine: {NumCores:4 CpuFrequency:900000 MemoryCapacity:970452992 MachineID:822a063820bf4276a8c5b4da928a438c SystemUUID:07c0f9c7ac2242e2954579d53e00b836 BootID:3148f74f-555c-4df9-ab12-79e04a88e086 Filesystems:[{Device:/dev/root Capacity:14946500608 Type:vfs Inodes:3796576}] DiskMap:map[179:0:{Name:mmcblk0 Major:179 Minor:0 Size:16021192704 Scheduler:deadline}] NetworkDevices:[{Name:eth0 MacAddress:b8:27:eb:8b:3c:c6 Speed:100 Mtu:1500} {Name:flannel0 MacAddress: Speed:10 Mtu:1472}] Topology:[{Id:0 Memory:0 Cores:[{Id:0 Threads:[0] Caches:[]} {Id:1 Threads:[1] Caches:[]} {Id:2 Threads:[2] Caches:[]} {Id:3 Threads:[3] Caches:[]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I0608 09:19:43.109629     769 manager.go:175] Version: {KernelVersion:4.1.12-hypriotos-v7+ ContainerOsVersion:Debian GNU/Linux 8 (jessie) DockerVersion:1.9.0 CadvisorVersion: CadvisorRevision:}
I0608 09:19:43.118227     769 server.go:319] Using root directory: /var/lib/kubelet
I0608 09:19:43.119828     769 server.go:673] Adding manifest file: /etc/kubernetes/manifests-multi
I0608 09:19:43.120179     769 file.go:47] Watching path "/etc/kubernetes/manifests-multi"
I0608 09:19:43.120347     769 server.go:683] Watching apiserver
W0608 09:19:43.164980     769 kubelet.go:508] Hairpin mode set to "promiscuous-bridge" but configureCBR0 is false, falling back to "hairpin-veth"
I0608 09:19:43.165217     769 kubelet.go:276] Hairpin mode set to "hairpin-veth"
I0608 09:19:44.445117     769 manager.go:244] Setting dockerRoot to /var/lib/docker
I0608 09:19:44.452306     769 plugins.go:56] Registering credential provider: .dockercfg
I0608 09:19:44.458106     769 plugins.go:291] Loaded volume plugin "kubernetes.io/aws-ebs"
I0608 09:19:44.458441     769 plugins.go:291] Loaded volume plugin "kubernetes.io/empty-dir"
I0608 09:19:44.458994     769 plugins.go:291] Loaded volume plugin "kubernetes.io/gce-pd"
I0608 09:19:44.459312     769 plugins.go:291] Loaded volume plugin "kubernetes.io/git-repo"
I0608 09:19:44.459766     769 plugins.go:291] Loaded volume plugin "kubernetes.io/host-path"
I0608 09:19:44.460058     769 plugins.go:291] Loaded volume plugin "kubernetes.io/nfs"
I0608 09:19:44.460314     769 plugins.go:291] Loaded volume plugin "kubernetes.io/secret"
I0608 09:19:44.460872     769 plugins.go:291] Loaded volume plugin "kubernetes.io/iscsi"
I0608 09:19:44.461310     769 plugins.go:291] Loaded volume plugin "kubernetes.io/glusterfs"
I0608 09:19:44.461611     769 plugins.go:291] Loaded volume plugin "kubernetes.io/persistent-claim"
I0608 09:19:44.462352     769 plugins.go:291] Loaded volume plugin "kubernetes.io/rbd"
I0608 09:19:44.462801     769 plugins.go:291] Loaded volume plugin "kubernetes.io/cinder"
I0608 09:19:44.463297     769 plugins.go:291] Loaded volume plugin "kubernetes.io/cephfs"
I0608 09:19:44.463928     769 plugins.go:291] Loaded volume plugin "kubernetes.io/downward-api"
I0608 09:19:44.464562     769 plugins.go:291] Loaded volume plugin "kubernetes.io/fc"
I0608 09:19:44.465098     769 plugins.go:291] Loaded volume plugin "kubernetes.io/flocker"
I0608 09:19:44.465609     769 plugins.go:291] Loaded volume plugin "kubernetes.io/azure-file"
I0608 09:19:44.466192     769 plugins.go:291] Loaded volume plugin "kubernetes.io/configmap"
I0608 09:19:44.481512     769 server.go:632] Started kubelet
E0608 09:19:44.483696     769 kubelet.go:956] Image garbage collection failed: unable to find data for container /
I0608 09:19:44.483849     769 server.go:109] Starting to listen on 0.0.0.0:10250
I0608 09:19:44.484162     769 server.go:126] Starting to listen read-only on 0.0.0.0:10255
E0608 09:19:44.513219     769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping)
I0608 09:19:44.563938     769 container_manager_linux.go:207] Updating kernel flag: vm/overcommit_memory, expected value: 1, actual value: 0
I0608 09:19:44.564896     769 container_manager_linux.go:207] Updating kernel flag: kernel/panic, expected value: 10, actual value: 0
I0608 09:19:44.565542     769 container_manager_linux.go:207] Updating kernel flag: kernel/panic_on_oops, expected value: 1, actual value: 0
I0608 09:19:44.568361     769 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0608 09:19:44.568627     769 manager.go:123] Starting to sync pod status with apiserver
I0608 09:19:44.568820     769 kubelet.go:2356] Starting kubelet main sync loop.
I0608 09:19:44.568969     769 kubelet.go:2365] skipping pod synchronization - [container runtime is down]
I0608 09:19:45.499027     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:45.499529     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:45.506507     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:46.039350     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:46.039646     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:46.043880     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
E0608 09:19:46.498331     769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping)
I0608 09:19:46.966327     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:46.966641     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:46.970968     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:47.512787     769 factory.go:230] Registering Docker factory
I0608 09:19:47.576324     769 factory.go:97] Registering Raw factory
I0608 09:19:48.044110     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:48.044409     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:48.049325     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:49.132613     769 manager.go:1003] Started watching for new ooms in manager
I0608 09:19:49.154846     769 oomparser.go:182] oomparser using systemd
I0608 09:19:49.172850     769 manager.go:256] Starting recovery of all containers
I0608 09:19:49.529570     769 manager.go:261] Recovery completed
I0608 09:19:49.569951     769 kubelet.go:2365] skipping pod synchronization - [container runtime is down]
I0608 09:19:49.781660     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:49.782820     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:49.796120     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:53.112626     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:53.112966     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:53.117777     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:54.571235     769 kubelet.go:2388] SyncLoop (ADD, "file"): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)"
E0608 09:19:54.571618     769 kubelet.go:2307] error getting node: node '192.168.1.84' is not in cache
I0608 09:19:54.572268     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"e736ec8218e250651b39758f3bbde22d4cdbb343e4118530d5791e4218786970"}
W0608 09:19:54.586217     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:54.597285     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"277772303bb1fa1c72ebe496016d1a3e00e961d5935c126c5285c0af76fa8456"}
E0608 09:19:54.609676     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:54.678305     769 manager.go:1698] Need to restart pod infra container for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)" because it is not found
I0608 09:19:54.770520     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"a64055838d257678ba5178bc2589f66839971070c6735335682c80785e51c943"}
I0608 09:19:54.823445     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"33ee7433077053694ff60552c600a535307ccfd0d752a2339c5c739591098d2b"}
I0608 09:19:54.879917     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"1c51763f63dfa80f6bc634f662710b71bfa341c0c69009067e2c3ae4a8a1673e"}
I0608 09:19:54.926815     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"087f7e397a98370f3a201e39b49e875c96b3c8290993ed1fc4a42dc848b0680b"}
I0608 09:19:55.008764     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"5e6ab61a95df5120cec057e515ddb7679de385169b516b7f09d3ede4e9cd2f50"}
I0608 09:19:55.920613     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"390a981a905d603007fb3009953efa5bba54d26287eeff4c5cbc8983f039134f"}
E0608 09:19:56.521544     769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping)
I0608 09:19:57.315403     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"f818c0e9b622947a00cc8cc7ce719846c965bbe47a26c90bd7dcc6ec81c9ef0f"}
I0608 09:19:59.233783     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"defee550850fd55fc2ecb1a41fdd47129133d0b0b8f1576f8cff0c537022782a"}
I0608 09:19:59.830736     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:59.831073     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:59.837299     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:00.511849     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"79ea416c11adae72af1e454b07c5f00efcc6677c45a76d510cc0717dc7015806"}
W0608 09:20:00.518862     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
E0608 09:20:00.525216     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:00.615637     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"162a0ec1abd0a329ff4f0582a72f2c47b9e99a1fbcc02409861b397f78480d16"}
E0608 09:20:01.612801     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
E0608 09:20:02.672719     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
W0608 09:20:04.572065     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
E0608 09:20:06.527979     769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping)
I0608 09:20:07.154072     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:20:07.154551     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:20:07.166567     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:10.483245     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"79ea416c11adae72af1e454b07c5f00efcc6677c45a76d510cc0717dc7015806"}
W0608 09:20:10.542522     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
E0608 09:20:10.548165     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:11.954701     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"79a4cbedadc1a825bce592b0c4cde042ffea5aa65f7c4227c8aec379aa64012c"}
W0608 09:20:12.042905     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
E0608 09:20:12.044221     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:14.288508     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:20:14.288868     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:20:14.300563     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
W0608 09:20:14.574069     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
E0608 09:20:16.536424     769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping)
I0608 09:20:21.433294     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:20:21.433579     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:20:21.439670     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:23.007412     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"79a4cbedadc1a825bce592b0c4cde042ffea5aa65f7c4227c8aec379aa64012c"}
E0608 09:20:23.094738     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
W0608 09:20:23.094918     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:23.112488     769 manager.go:2047] Back-off 10s restarting failed container=controller-manager pod=k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)
E0608 09:20:23.113255     769 pod_workers.go:138] Error syncing pod 9391883ad78c50e752d5748347ef9aa2, skipping: failed to "StartContainer" for "controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=controller-manager pod=k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)"
I0608 09:20:24.463284     769 kubelet.go:2391] SyncLoop (UPDATE, "api"): "k8s-master-192.168.1.84_default(15a52b5d-2cb3-11e6-ae88-b827eb8b3cc6)"
I0608 09:20:24.497971     769 manager.go:2047] Back-off 10s restarting failed container=controller-manager pod=k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)
E0608 09:20:24.498876     769 pod_workers.go:138] Error syncing pod 9391883ad78c50e752d5748347ef9aa2, skipping: failed to "StartContainer" for "controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=controller-manager pod=k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)"
W0608 09:20:27.051713     769 request.go:627] Throttling request took 99.568025ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:27.251946     769 request.go:627] Throttling request took 169.564927ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.14561128dd6997d6
W0608 09:20:27.451762     769 request.go:627] Throttling request took 141.993996ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:27.651819     769 request.go:627] Throttling request took 175.348684ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:27.851906     769 request.go:627] Throttling request took 169.614146ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.14561128dd6997d6
W0608 09:20:28.051684     769 request.go:627] Throttling request took 155.040509ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
I0608 09:20:28.573729     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:20:28.574075     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:20:28.745103     769 kubelet.go:1150] Node 192.168.1.84 was previously registered
W0608 09:20:28.851791     769 request.go:627] Throttling request took 122.413785ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.14561128dd6997d6
W0608 09:20:29.051663     769 request.go:627] Throttling request took 157.66653ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:29.251789     769 request.go:627] Throttling request took 177.7883ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:29.451806     769 request.go:627] Throttling request took 174.880614ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/192.168.1.84.145611266d7f848a
W0608 09:20:29.651741     769 request.go:627] Throttling request took 147.397079ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/192.168.1.84.145611266d7f848a
W0608 09:20:29.851871     769 request.go:627] Throttling request took 164.236896ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:30.051664     769 request.go:627] Throttling request took 177.139919ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:30.251706     769 request.go:627] Throttling request took 176.659299ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.1456112f2f6934f8
W0608 09:20:30.451679     769 request.go:627] Throttling request took 159.788336ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.1456112f2f7d5ddc
W0608 09:20:30.651761     769 request.go:627] Throttling request took 154.810042ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/192.168.1.84.145611266d7f848a
W0608 09:20:30.851640     769 request.go:627] Throttling request took 155.878888ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
I0608 09:20:37.134464     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"246f201be0479d48a0a44c4d4f8a95126d73ac04146e3029739cfd1da7d1ee77"}
E0608 09:20:55.460305     769 fsHandler.go:106] failed to collect filesystem stats - du command failed on /rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62 with output stdout: 238752    /rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62
, stderr: du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/679/task/702/fdinfo/19': No such file or directory
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/679/task/737/fdinfo/19': No such file or directory
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/679/task/738/fd/19': No such file or directory
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/1116/task/1116/fd/3': No such file or directory
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/1116/task/1116/fdinfo/3': No such file or directory
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/1116/fd/4': No such file or directory
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/1116/fdinfo/4': No such file or directory
 - exit status 1
I0608 09:20:55.460602     769 fsHandler.go:116] `du` on following dirs took 2.515023345s: [/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62 /rootfs/var/lib/docker/containers/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62]
-- Juul
Tags: dns, docker, kubernetes, raspberry-pi2

1 Answer

6/9/2016

I've managed to, errr... "solve" the problem by adding the line 173.194.65.82 gcr.io to /etc/hosts, which at least keeps the outgoing DNS lookups from swamping the router, because the domain is now resolved locally. I suppose this will do for my demo tomorrow, since at least I'll have a functioning cluster that is not hellbent on DDoS'ing my router.
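In concrete terms, the entry can be appended like this (run as root; note the 10-second TTL in the tcpdump output above, so the hard-coded address can go stale at any time):

$ # workaround only: pin gcr.io to the address it resolved to today
$ echo "173.194.65.82 gcr.io" >> /etc/hosts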

It is horrendously ugly, though; I almost short-circuited one of the Pis with the tears of sadness falling from my eyes. I'm still interested in fixing the underlying problem if anyone has suggestions!

-- Juul
Source: Stack Overflow