For every service in a k8s cluster, Kubernetes does SNAT on request packets. The iptables rules are:
-A KUBE-SERVICES ! -s 10.254.0.0/16 -d 10.254.186.26/32 -p tcp -m comment --comment "policy-demo/nginx: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.254.186.26/32 -p tcp -m comment --comment "policy-demo/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-3VXIGVIYYFN7DHDA
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
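Read together: the first rule marks any packet destined for the cluster IP whose source is outside 10.254.0.0/16 (the configured --cluster-cidr), and KUBE-POSTROUTING then masquerades every marked packet. To pull these rules on a node (a sketch, using the service from above):

# Show the NAT rules kube-proxy generated for this service
iptables -t nat -S KUBE-SERVICES | grep 'policy-demo/nginx'
# Show the masquerade rule that acts on the 0x4000 mark
iptables -t nat -S KUBE-POSTROUTING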
This works well in most circumstances, but not with NetworkPolicy. Calico uses ipset to implement NetworkPolicy, and the matched set only contains pod IPs.
So when the service's backend pod runs on node1 and the client pod runs on node2, the NetworkPolicy will DROP the request, because after SNAT the source IP of the request is node2's IP or its flannel.1 address, which is not in the set.
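You can see the SNAT happening with tcpdump on the receiving node (a sketch; the interface and port are assumptions based on my setup):

# Run on node1, where the nginx backend pod lives. With SNAT in effect,
# the source address is node2's flannel.1 IP instead of the client pod's IP.
tcpdump -nn -i flannel.1 'tcp port 80'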
I think there might be a way to disable SNAT for ClusterIP services, but I can't find it anywhere. Could anyone help me?
Thank you very much!
The problem has been resolved.
I changed kube-proxy's --cluster-cidr=10.254.0.0/16 to --cluster-cidr=172.30.0.0/16, and then it worked well.
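For reference, roughly what the change looks like (a sketch; all other kube-proxy flags are omitted). kube-proxy regenerates the KUBE-SERVICES rule from this flag, so pod-sourced traffic stops matching the masquerade mark:

# --cluster-cidr must be the pod CIDR, not the service CIDR
kube-proxy --cluster-cidr=172.30.0.0/16 ...
# Regenerated rule: packets from pods (172.30.0.0/16) no longer hit KUBE-MARK-MASQ,
# so the client pod's real IP reaches the backend and Calico's ipset matches:
# -A KUBE-SERVICES ! -s 172.30.0.0/16 -d 10.254.186.26/32 -p tcp -m tcp --dport 80 -j KUBE-MARK-MASQ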
The kube-proxy --cluster-cidr needs to match the pod CIDR configured on the controller manager, and also the one used by Calico.
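A couple of sanity checks that all three agree (a sketch; calicoctl syntax and output vary by Calico version):

# Compare the cluster-cidr flag on kube-proxy and kube-controller-manager
ps aux | grep -Eo 'cluster-cidr=[^ ]+'
# Calico's pod IP pool should be the same CIDR
calicoctl get ippool -o wide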