I deployed a Kubernetes cluster following this guide: https://blog.hypriot.com/post/setup-kubernetes-raspberry-pi-cluster/. It basically uses HypriotOS and the Kubernetes packages from the Debian repository.
After the deployment, all the pods were running and no faults were reported. However, DNS resolution was not working on the worker node.
master
$ kubectl -n kube-system get svc
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   34m
kubernetes-dashboard   10.103.97.112   <nodes>       80:30518/TCP    31m
# I installed dnsutils to get the dig command
$ dig @10.96.0.10 || echo "FAIL"
# shows a valid response (note that we are not resolving anything)
worker
$ dig @10.96.0.10 || echo "FAIL"
....
FAIL
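A more direct check is to actually resolve a cluster-internal name rather than just contacting the server. A sketch of such a test, assuming 10.96.0.10 is your kube-dns ClusterIP (as in the output above) and that `kubernetes.default.svc.cluster.local` is the standard in-cluster name of the API server service:

```shell
# Resolve the API server's in-cluster DNS name via kube-dns directly.
# dig exits non-zero when no server could be reached, so || catches the
# timeout case on the broken worker node.
dig +time=2 +tries=1 +short @10.96.0.10 kubernetes.default.svc.cluster.local \
  && echo "DNS OK" || echo "FAIL"
```

On the master this should print the ClusterIP of the `kubernetes` service followed by "DNS OK"; on the affected worker it times out and prints "FAIL".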
It turned out that the answer was in one of the comments from , but it was not obvious that this was my issue.
As the author of that comment stated, it is due to a change in the iptables policies in Docker versions > 1.13: Docker now sets the default policy of the FORWARD chain to DROP, which blocks the pod-to-pod traffic forwarded over the CNI bridge.
To solve it, execute the following on both nodes:
sudo iptables -A FORWARD -i cni0 -j ACCEPT
sudo iptables -A FORWARD -o cni0 -j ACCEPT
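Note that these rules are not persistent across reboots. A hedged sketch of how to verify and persist them on a Debian-based system such as HypriotOS, assuming the `iptables-persistent` package is available in your repositories:

```shell
# Confirm the ACCEPT rules for the cni0 bridge are in place and see the
# chain's default policy (which Docker > 1.13 sets to DROP).
sudo iptables -L FORWARD --line-numbers

# iptables rules are lost on reboot; iptables-persistent saves the current
# ruleset and restores it at boot time.
sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save
```

Alternatively, rather than appending per-interface rules, some setups simply reset the chain's default policy with `sudo iptables -P FORWARD ACCEPT`, which is broader but has the same effect for this problem.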