My kubelet complains:
E1201 09:00:12.562610 28747 kubelet_network.go:365] Failed to ensure rule to drop packet marked by KUBE-MARK-DROP in filter chain KUBE-FIREWALL: error appending rule: exit status 1: iptables: No chain/target/match by that name.
This usually happens when you forget to pass --net=host to 'rkt run', but I have not:
export RKT_OPTS="--volume var-log,kind=host,source=/var/log \
  --mount volume=var-log,target=/var/log \
  --volume dns,kind=host,source=/etc/resolv.conf \
  --mount volume=dns,target=/etc/resolv.conf \
  --net=host"
The following confirms my kube-proxy (started by kubelet) is in the same namespace as the host that owns the iptables chains:
root@i8:/etc# d exec -it 738 readlink /proc/self/ns/net
net:[4026531963]
root@i8:/etc# readlink /proc/self/ns/net
net:[4026531963]
root@i8:/etc# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
738ed14ec802 quay.io/coreos/hyperkube:v1.4.6_coreos.0 "/hyperkube proxy --m" 44 minutes ago Up 44 minutes k8s_kube-proxy.c445d412_kube-proxy-192.168.101.128_kube-system_438e3d01f328e73a199c6c0ed1f92053_10197c34
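For anyone repeating this check: two processes share a network namespace exactly when their /proc/&lt;pid&gt;/ns/net symlinks resolve to the same inode. A minimal self-contained sketch (comparing the host shell against itself, since the container ID above is specific to my box):

```shell
# Two processes share a netns iff these symlink targets match,
# e.g. net:[4026531963].
host_ns=$(readlink /proc/self/ns/net)
self_ns=$(readlink "/proc/$$/ns/net")
# For a container you'd compare against, e.g.:
#   docker exec <container-id> readlink /proc/self/ns/net
if [ "$host_ns" = "$self_ns" ]; then
  echo "same network namespace"
fi
```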
The proxy similarly complains "No chain/target/match by that name".
I have also verified the iptables chain:
# Generated by iptables-save v1.4.21 on Thu Dec 1 01:07:11 2016
*filter
:INPUT ACCEPT [4852:1419411]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [5612:5965118]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
This satisfies, I think, the complaint in the error message, and it matches the filter chains on a problem-free CoreOS worker (a different machine I compared against).
The problem worker is Debian Jessie running docker 1.12.3 and rkt 1.18.0.
Both the good worker and the problem worker are running the same version of iptables, 1.4.21.
KUBELET_VERSION=v1.4.6_coreos.0
The symptom is that kubernetes on the problem worker does not install any of the expected iptables rules (such as KUBE-NODEPORTS), so this worker cannot serve NodePort services. I suspect the error above is the cause.
The problem worker has no problem running pods that the Master Node schedules.
Pods on the problem worker are serving requests OK from a proxy running on a different (coreos) worker.
I'm using flannel for networking.
If anyone was wondering, I need to get kubernetes working on Debian (yeah, it's a long story).
What else can I do to isolate what seems to be kubelet not seeing the host's iptables?
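One more check I can think of: inspect the running kernel's config for the netfilter options behind the iptables extensions kubelet uses. A sketch, with the caveat that the config path varies by distro and these CONFIG_ names are my best guesses:

```shell
# Look for the netfilter match/target options behind -m comment, -m mark,
# and -m statistic; /proc/config.gz only exists with CONFIG_IKCONFIG_PROC.
cfg="/boot/config-$(uname -r)"
[ -r "$cfg" ] || cfg="/proc/config.gz"
for opt in NETFILTER_XT_MATCH_COMMENT NETFILTER_XT_MARK NETFILTER_XT_MATCH_STATISTIC; do
  zgrep -h "^CONFIG_${opt}=" "$cfg" 2>/dev/null || echo "CONFIG_${opt}: not set"
done
```

An option reported as "not set" would mean the corresponding iptables extension can never work on that kernel.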
After much fault isolation, I've found the cause and solution.
In my case, I'm running a custom kernel package (linux-image) which was missing several iptables-related kernel modules. So when kubelet tried to append an iptables rule containing a comment, it failed because xt_comment wasn't available.
These are the modules I was missing: ipt_REJECT, nf_conntrack_netlink, nf_reject_ipv4, sch_fq_codel (maybe not required), xt_comment, xt_mark, xt_recent, xt_statistic
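A quick sketch for checking which of these your kernel actually ships as loadable modules (a module absent here may still be compiled in, so check the kernel config too):

```shell
# List which of the needed netfilter modules are missing from the
# running kernel's module tree.
needed="ipt_REJECT nf_conntrack_netlink nf_reject_ipv4 xt_comment xt_mark xt_recent xt_statistic"
missing=""
for m in $needed; do
  find "/lib/modules/$(uname -r)" -name "$m.ko*" 2>/dev/null | grep -q . \
    || missing="$missing $m"
done
echo "missing:$missing"
```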
To get a complete list of the modules I likely needed, I logged into a CoreOS kubernetes worker and looked at its lsmod output. Then I just compared that list against my "problem" machine.
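That comparison can be scripted. A sketch with the two module lists inlined for illustration; in practice you'd capture each one with `lsmod | awk 'NR>1 {print $1}' | sort`:

```shell
# Hypothetical module lists from the good (CoreOS) and problem workers.
printf 'ipt_REJECT\nxt_comment\nxt_mark\n' > good-modules.txt
printf 'xt_mark\n' > problem-modules.txt
# Lines unique to the good worker = modules the problem worker lacks.
comm -23 good-modules.txt problem-modules.txt
```

With these sample lists, the output is ipt_REJECT and xt_comment. (comm requires both inputs to be sorted.)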
I had this issue on a Gentoo box with a custom kernel configuration while running k8s via Rancher's k3d 1.3.1. Rebuilding the kernel with all the sane iptables options plus xt_comment solved this issue for me.