How to specify an interface for kube-proxy and kubelet on a node with multiple network interfaces?

8/30/2017

I've deployed a Kubernetes environment on several servers, each with three network interfaces. Everything works well except the iptables rules for the cluster IP of the kube-apiserver.

All servers have three interfaces, one in each of these subnets:

10.0.41.0/24
10.0.42.0/24
192.168.247.0/24

I picked one of them for the kube-apiserver, set the apiserver up with certificates, and opened 10.0.41.4:6443 for HTTPS requests. So the apiserver is configured like this:

KUBE_API_ADDRESS="--insecure-bind-address=10.0.41.4 --advertise-address=10.0.41.4 --bind-address=10.0.41.4"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_API_ARGS="--authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --experimental-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxsize=100 --audit-log-maxbackup=0 --audit-log-path=/home/kube/audit.log --event-ttl=1h --runtime-config=batch/v2alpha1=true"
...
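As a sanity check (my own suggested probe, not part of the original setup), the apiserver can be curled on each address; any HTTP response, even a 401/403, proves a listener is there, while the other interfaces should refuse the connection since --bind-address is 10.0.41.4:

curl -k https://10.0.41.4:6443/version   # answers (possibly with 401/403)
curl -k https://10.0.42.4:6443/version   # connection refused: apiserver is not bound here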

The kubelet and kube-proxy are started like this:

/usr/bin/kubelet --logtostderr=true --v=0 \
  --api-servers=http://10.0.41.4:8080 \
  --address=10.0.41.1 \
  --hostname-override=10.0.41.1 \
  --allow-privileged=true \
  --cgroup-driver=systemd \
  --cluster-dns=10.254.0.2 \
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --require-kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --cluster-domain=cluster.local. \
  --hairpin-mode promiscuous-bridge \
  --serialize-image-pulls=false \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin \
  --node-ip=10.0.41.1

/usr/bin/kube-proxy --logtostderr=true --v=0 \
  --bind-address=10.0.41.1 \
  --hostname-override=10.0.41.1 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --cluster-cidr=10.254.0.0/16

Sometimes I can't connect to 10.254.0.1:443 with curl, because kube-proxy installs iptables NAT rules (using the recent and statistic random modules) that spread the service across endpoints on all three interfaces.
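The chains below come from the nat table; they can be dumped on a node with, for example:

iptables -t nat -L KUBE-SVC-NPX46M4PTMTKRN6Y -n -v   # service chain for default/kubernetes:https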

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts      bytes target     prot opt in     out     source               destination         
   0        0 KUBE-SEP-FA7AX7V3VYKS4DYK  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-FA7AX7V3VYKS4DYK side: source mask: 255.255.255.255
   0        0 KUBE-SEP-WRH6L36KL6VQULQ6  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-WRH6L36KL6VQULQ6 side: source mask: 255.255.255.255
   0        0 KUBE-SEP-NQ46EZO2HBXVI7ID  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-NQ46EZO2HBXVI7ID side: source mask: 255.255.255.255
   0        0 KUBE-SEP-FA7AX7V3VYKS4DYK  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ statistic mode random probability 0.33332999982
   0        0 KUBE-SEP-WRH6L36KL6VQULQ6  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ statistic mode random probability 0.50000000000
   0        0 KUBE-SEP-NQ46EZO2HBXVI7ID  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */

Chain KUBE-SEP-FA7AX7V3VYKS4DYK (2 references)
pkts      bytes target     prot opt in     out     source               destination         
   0        0 KUBE-MARK-MASQ  all  --  *      *       10.0.41.4            0.0.0.0/0            /* default/kubernetes:https */
   0        0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ recent: SET name: KUBE-SEP-FA7AX7V3VYKS4DYK side: source mask: 255.255.255.255 tcp to:10.0.41.4:6443

Chain KUBE-SEP-NQ46EZO2HBXVI7ID (2 references)
pkts      bytes target     prot opt in     out     source               destination         
   0        0 KUBE-MARK-MASQ  all  --  *      *       192.168.247.19       0.0.0.0/0            /* default/kubernetes:https */
   0        0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ recent: SET name: KUBE-SEP-NQ46EZO2HBXVI7ID side: source mask: 255.255.255.255 tcp to:192.168.247.19:6443

Chain KUBE-SEP-WRH6L36KL6VQULQ6 (2 references)
pkts      bytes target     prot opt in     out     source               destination         
   0        0 KUBE-MARK-MASQ  all  --  *      *       10.0.42.4            0.0.0.0/0            /* default/kubernetes:https */
   0        0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ recent: SET name: KUBE-SEP-WRH6L36KL6VQULQ6 side: source mask: 255.255.255.255 tcp to:10.0.42.4:6443

10.254.0.1:443 is only reachable when iptables happens to DNAT the connection to 10.0.41.4:6443. Other services (an nginx test server, for example) are reachable all the time, because their endpoint IPs are assigned by the network plugin (Calico) and each one is unique.
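The backends behind 10.254.0.1:443 are simply the endpoints of the kubernetes service in the default namespace, which can be listed directly:

kubectl get endpoints kubernetes

In this cluster the list matched the SEP chains above (10.0.41.4, 10.0.42.4 and 192.168.247.19, all on port 6443), even though only 10.0.41.4:6443 actually answers.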

My question: what can I do to make kube-proxy, the kubelet, or whatever else is responsible work only on the interface in 10.0.41.0/24?

I've tried deleting the unwanted iptables rules, but they are restored immediately, so that doesn't work.
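For example (a sketch of the kind of thing I tried, using the service chain from the dump above):

iptables -t nat -F KUBE-SVC-NPX46M4PTMTKRN6Y        # flush the chain
iptables -t nat -L KUBE-SVC-NPX46M4PTMTKRN6Y -n -v  # the rules reappear after kube-proxy's next sync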

I also looked through the parameters of the kube* components and found nothing that helps.

-- Sarshes
kubernetes

1 Answer

9/21/2017

Alright, I think I've found the reason. The problem is caused by the --apiserver-count parameter.

When --apiserver-count is set to 3, the kube-apiserver registers more than one of the node's addresses (not necessarily all 3, sometimes 2) as endpoints for the service named kubernetes, and kube-proxy then adds iptables NAT rules for every one of those endpoints. Removing --apiserver-count fixes it.
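To verify the fix (my own check, assuming a single apiserver): drop the flag from KUBE_API_ARGS, restart kube-apiserver, and confirm that only the advertised address remains behind the service:

kubectl get endpoints kubernetes -o jsonpath='{.subsets[*].addresses[*].ip}'   # should print only 10.0.41.4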

So kube-proxy was actually working fine; I think this may be a bug in kube-apiserver.

-- Sarshes
Source: StackOverflow