In my pods I cannot reach external hosts; in my case this would be https://login.microsoftonline.com.
I've been following the debugging DNS resolution guide at https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/, but my limited knowledge of Kubernetes makes it hard to apply the instructions given there.
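For reference, the dnsutils pod used below was created following that guide, roughly like this (the manifest URL is the one from the linked documentation):
microk8s kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
microk8s kubectl get pods dnsutils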
A local, in-cluster lookup works fine:
microk8s kubectl exec -i -t dnsutils -- nslookup kubernetes.default
Server: 10.152.183.10
Address: 10.152.183.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.152.183.1
However, trying to reach any external domain fails:
microk8s kubectl exec -i -t dnsutils -- nslookup stackoverflow.com
Server: 10.152.183.10
Address: 10.152.183.10#53
** server can't find stackoverflow.com.internal-domain.com: SERVFAIL
command terminated with exit code 1
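A useful next step from the same guide is to look at the CoreDNS logs for forwarding errors; on microk8s that should be something along these lines (k8s-app=kube-dns is the label the guide uses to select the CoreDNS pods):
microk8s kubectl logs --namespace=kube-system -l k8s-app=kube-dns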
The known issues section has the following paragraph:
Some Linux distributions (e.g. Ubuntu) use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet's --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.
Given that the microk8s instance is running on Ubuntu, this might be worth investigating, but I have no idea where and how to apply that --resolv-conf flag.
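As far as I can tell, microk8s does not use kubeadm; the kubelet arguments live in a file under the snap data directory, so the flag would presumably be added there. Treat the exact path as an assumption and verify it on your node:
# assumption: microk8s keeps its kubelet arguments in this file
echo '--resolv-conf=/run/systemd/resolve/resolv.conf' | sudo tee -a /var/snap/microk8s/current/args/kubelet
microk8s stop && microk8s start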
I am grateful for any hints on how to track down this issue, since DNS (nslookup, traceroute, et al.) works flawlessly on the host system.
Update: this is the /etc/resolv.conf on the host:
nameserver 127.0.0.53
options edns0 trust-ad
search internal-domain.com
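Since 127.0.0.53 is only the systemd-resolved stub listener, the actual upstream servers can be checked with, for example:
resolvectl status                      # shows the real upstream DNS servers per link
cat /run/systemd/resolve/resolv.conf   # the non-stub resolv.conf mentioned in the known issues section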
And that is the /etc/resolv.conf from within the dnsutils pod:
search default.svc.cluster.local svc.cluster.local cluster.local internal-domain.com
nameserver 10.152.183.10
options ndots:5
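Worth noting: with options ndots:5 and internal-domain.com in the search list, a bare name like stackoverflow.com has the search domains appended before it is tried as-is, which is presumably why the SERVFAIL above mentions stackoverflow.com.internal-domain.com. Querying the fully qualified name with a trailing dot bypasses the search list and can help narrow things down:
microk8s kubectl exec -i -t dnsutils -- nslookup stackoverflow.com.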
And this is the CoreDNS configMap:
configMap:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        log . {
          class error
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 8.8.8.8 8.8.4.4
        cache 30
        loop
        reload
        loadbalance
    }
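Since the Corefile forwards everything to 8.8.8.8 and 8.8.4.4, it can also help to query one of those upstreams directly from the pod, to check whether they are reachable from the pod network at all:
microk8s kubectl exec -i -t dnsutils -- nslookup stackoverflow.com 8.8.8.8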
In the end I could not figure out what the reason for this behaviour was, so I did a full reset of the node:
microk8s reset
sudo snap remove microk8s
sudo snap install microk8s --classic --channel=1.19
Followed by the remaining instructions to configure secrets et al.
Change forward . 8.8.8.8 8.8.4.4 to forward . /etc/resolv.conf in the CoreDNS Corefile.
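A sketch of how that change could be applied, assuming the default CoreDNS deployment shipped by the microk8s dns addon (the configmap and deployment names below are the usual ones, but worth verifying on your cluster):
microk8s kubectl -n kube-system edit configmap coredns
# change:  forward . 8.8.8.8 8.8.4.4
# to:      forward . /etc/resolv.conf
microk8s kubectl -n kube-system rollout restart deployment coredns
The reload plugin in the Corefile above should also pick up the configmap change after a short delay, so the restart mainly speeds things up.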