I have a Kubernetes cluster whose control planes run in HA behind a keepalived VIP (keepalived is installed only on the control-plane nodes). Everything works as expected when the firewall is disabled, but all internal communication and NodePorts behave inconsistently once the firewall is enabled. As a starting point I opened the mandatory ports that Kubernetes requires, but that alone does not let applications communicate. Is there a set of firewall rules that would let me reach the NodePorts/application services?
In case I was unclear, here is a short summary:
Control Plane 1: 172.16.23.110
Control Plane 2: 172.16.23.111
Control Plane 3: 172.16.23.112
Keepalived VIP: 172.16.23.116
Worker Nodes: 172.16.23.120 - 172.16.23.125
I'm trying to access a service through its NodePort, e.g. https://172.16.23.116:30443
I have added the firewall rule below on all nodes (all nodes run CentOS 7.6):
cat /etc/firewalld/zones/internal.xml
<rule>
  <protocol value="vrrp" />
  <accept />
</rule>
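(For reference, the equivalent rule can also be added from the command line as a rich rule; adjust the zone if your interfaces are not in internal:)
firewall-cmd --permanent --zone=internal --add-rich-rule='rule protocol value="vrrp" accept'
firewall-cmd --reload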
Kindly help
I am not sure that I understood the question correctly, but according to the official documentation the following ports need to be open:
Control-plane node(s):
6443/tcp - Kubernetes API server
2379-2380/tcp - etcd server client API
10250/tcp - kubelet API
10251/tcp - kube-scheduler
10252/tcp - kube-controller-manager
Worker node(s):
10250/tcp - kubelet API
30000-32767/tcp - NodePort Services
Here are example firewall-cmd commands to open them:
Control-plane node(s):
firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
firewall-cmd --permanent --add-port=10251/tcp       # kube-scheduler
firewall-cmd --permanent --add-port=10252/tcp       # kube-controller-manager
firewall-cmd --permanent --add-port=10255/tcp       # kubelet read-only port (deprecated)
firewall-cmd --add-masquerade --permanent           # NAT so pod/service traffic can leave the node
# only if you want NodePorts exposed on the control-plane IPs (and the keepalived VIP) as well
firewall-cmd --permanent --add-port=30000-32767/tcp
systemctl restart firewalld
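After the restart you can sanity-check that the rules are actually active on each node:
firewall-cmd --list-ports
firewall-cmd --query-masquerade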
Worker node(s):
firewall-cmd --permanent --add-port=10250/tcp        # kubelet API
firewall-cmd --permanent --add-port=10255/tcp        # kubelet read-only port (deprecated)
firewall-cmd --permanent --add-port=8472/udp         # VXLAN overlay network (e.g. Flannel)
firewall-cmd --permanent --add-port=30000-32767/tcp  # NodePort Services
firewall-cmd --add-masquerade --permanent            # NAT so pod/service traffic can leave the node
systemctl restart firewalld
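One caveat: restarting firewalld flushes the iptables rules that kube-proxy maintains for Services, which is a common cause of NodePorts that only work intermittently. If the ports are open but services still behave inconsistently, forcing kube-proxy to rewrite its rules may help (this assumes a kubeadm-style kube-proxy DaemonSet; adjust for your setup):
kubectl -n kube-system rollout restart daemonset kube-proxy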