`systemctl stop kube-proxy` will not clean up the iptables rules

7/5/2016

kube version: 1.22

  1. There is a Service running in the k8s cluster that uses NodePort 30003.

  2. Run `systemctl stop kube-proxy` on minion A; `ss -antpl | grep 30003` shows that port 30003 is now free (nothing is listening).

  3. From minion B, `telnet $A_IP 30003` (or `nc $A_IP 30003`) still succeeds.

  4. Run `iptables -F -t nat` on minion A.
  5. Repeat step 3: `telnet $A_IP 30003` now fails.
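
Put together, the reproduction looks roughly like the sketch below (`$A_IP` and port 30003 come from the steps above). Step 3 succeeds even though nothing listens on 30003 because the nat rules kube-proxy installed DNAT the traffic in PREROUTING to a backend pod before any local socket would be consulted.

```bash
# On minion A: stop kube-proxy and confirm nothing is listening on the NodePort.
systemctl stop kube-proxy
ss -antpl | grep 30003        # no output: no local listener

# On minion B: the NodePort still answers, because minion A's nat rules
# (installed earlier by kube-proxy) still DNAT the traffic to a backend pod.
nc -zv "$A_IP" 30003          # succeeds

# On minion A: flush the nat table, removing kube-proxy's rules.
iptables -t nat -F

# On minion B: the NodePort no longer answers.
nc -zv "$A_IP" 30003          # fails
```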

So shouldn't the iptables rules be cleaned up when kube-proxy exits abnormally?

-- workhardcc
kubernetes

1 Answer

7/5/2016

If you are running a cluster that uses kube-proxy for service IP to endpoint mapping, then it is expected that kube-proxy will be restarted shortly after it exits by a monitoring process (e.g. systemd, monit, supervisord, etc). In fact, in later versions of Kubernetes, kube-proxy runs as a privileged container and the kubelet ensures that it stays running. Since it is expected to be restarted quickly, cleaning up iptables would just cause them to be modified unnecessarily each time kube-proxy was restarted.
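
As a rough illustration of the supervision point, a systemd-managed kube-proxy is typically declared with a restart policy along these lines (the unit name, binary path, and flags here are assumptions for the sketch, not taken from the question):

```ini
# Illustrative kube-proxy unit; paths and flags vary by installation.
[Unit]
Description=Kubernetes Kube-Proxy
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy --master=https://MASTER_IP:6443
# Restart kube-proxy shortly after any exit, which is why it does not
# bother cleaning up its iptables rules on the way out.
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

With `Restart=always`, the supervisor brings kube-proxy back within seconds of an abnormal exit, so the rules it left behind become current again almost immediately.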

If you don't want kube-proxy to manage iptables for you, then you can decide not to run it in your cluster at all, or to manually clean up the iptables rules after you stop it.
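
If you do stop it for good, a minimal cleanup sketch follows; the blunt nat-table flush is the same one used in step 4 of the question, and the dedicated cleanup flag is version-dependent, so check `kube-proxy --help` for your release:

```bash
systemctl stop kube-proxy

# Some releases let kube-proxy remove its own rules (the exact flag name
# has varied across versions, e.g. a --cleanup style flag):
# kube-proxy --cleanup

# Blunt alternative (as in the question): flush the whole nat table.
# Note this also removes nat rules installed by anything else on the node.
iptables -t nat -F
# Delete the now-unreferenced custom chains (KUBE-SERVICES, KUBE-SVC-*, ...).
iptables -t nat -X
```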

-- Robert Bailey
Source: StackOverflow