I've built a cluster with 3 worker nodes and an admin node. The worker nodes have kube-dns and Calico deployed and configured. Each machine has its own external IP and an associated DNS name. I successfully ran nginx-ingress-controller, and its default 404 endpoint is reachable from outside.
Now the problem: for some reason, pods on the worker nodes are not able to establish outbound connections. When I shell exec into a pod, I can neither curl nor ping, even though the network seems to be configured correctly inside the pod. I tried examining the Calico configuration, but it's quite messy and I can't tell what might be wrong. Are there any default Calico/Kubernetes settings that forbid outgoing connections from pods? Or has anybody faced a similar issue?
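For context, these are roughly the checks I mean (the pod name and target hosts are just placeholders):

# exec into a running pod (pod name is an example)
kubectl exec -it <some-pod> -- sh

# from inside the pod, both of these fail for me
curl -v https://www.google.com
ping -c 3 8.8.8.8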
I'll provide log output on demand, as I'm not sure which information would be most useful for examining this issue.
Thanks for the comments. After many hours of investigation, I finally found that the problem was a misconfigured kube-dns. When you deploy kube-dns, it automatically imports the nameserver list from the machine's /etc/resolv.conf. That works great, unless you're running Ubuntu with the systemd-resolved DNS server installed (and it is installed by default). It acts as a local proxy DNS server listening on 127.0.0.53, which is unreachable from inside pods. That's why DNS nameservers remained unreachable even after kube-dns was installed and running.
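To illustrate, on a typical Ubuntu host with systemd-resolved the file only points at the local stub listener (the exact comment header may differ on your version):

$ cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
nameserver 127.0.0.53

kube-dns copies this 127.0.0.53 address as its upstream, and pods cannot reach it.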
The workaround I used for this problem is as follows:
Check which nameserver your machine actually uses - for me it was listed in /run/systemd/resolve/resolv.conf.
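For example (the nameserver address shown here is just an illustration):

$ cat /run/systemd/resolve/resolv.conf
nameserver 192.168.1.1

Depending on your systemd version, systemd-resolve --status or resolvectl status should show the same information.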
Create a new ConfigMap to replace the default kube-dns one, and fill it in as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["Your nameserver address"]
Redeploy kube-dns. DNS resolution should now work correctly.
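A minimal sketch of how to apply this (the file name is arbitrary, and the k8s-app=kube-dns label is the one kube-dns pods normally carry; verify it against your deployment):

kubectl apply -f kube-dns-configmap.yaml
# restart the kube-dns pods so they pick up the new ConfigMap
kubectl -n kube-system delete pod -l k8s-app=kube-dns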