I remember DNS records being cached locally on various Linux distros in the past, but this appears to have changed over the years (DNS caching in Linux). Within our (non-K8S) environment we measured a noticeable delay (1-2 ms) per request due to the resulting DNS lookups.
I also noticed there is no local DNS cache within K8S by default (https://github.com/kubernetes/kubernetes/issues/45363) and the DNS cache within CoreOS is also disabled by default (https://coreos.com/os/docs/latest/configuring-dns.html).
Given we're considering migrating towards K8S, I was wondering: why is this not enabled for Kubernetes in particular?
My only theory is that kube-dns updates records pre-emptively to ensure high availability, but I'm not sure whether K8S actually does that.
As a workaround, if I were to run dnsmasq on every node, would I break things? I noticed there have been attempts to make that setup the default within K8S, but those attempts/PRs appear to have gone stale and I'm not sure why.
Since Kubernetes 1.9+ was announced, CoreDNS has been included in kubeadm, minikube, and other tools as a default DNS server, replacing the former kube-dns (which was based on dnsmasq).
CoreDNS was built as a fork of the Caddy web server and uses middleware chains, where each middleware component provides a particular DNS feature. If you already use kube-dns, it is possible to switch to CoreDNS using this Link.
CoreDNS is already equipped with caching and forwarding features, so caching no longer has to run as a separate component, which removes the dependency on dnsmasq. For example:
. {
    proxy . 8.8.8.8:53
    cache example.org
}
There are a lot of plugins you can use to extend DNS functionality, such as proxying requests, rewriting them, doing health checks on endpoints, and publishing metrics to Prometheus.
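To illustrate, a Corefile combining several of those plugins might look like the sketch below. This is an assumption-laden example, not a drop-in config: the cluster zone, cache TTL, and upstream address are placeholders you would adapt to your setup.

# Hypothetical Corefile: in-cluster zone plus an external upstream.
cluster.local {
    kubernetes      # serve records for in-cluster services
    cache 30        # cache answers for up to 30 seconds
    health          # expose a health-check endpoint
    prometheus      # publish metrics for Prometheus to scrape
}

. {
    proxy . 8.8.8.8:53   # forward everything else to an upstream resolver
    cache                # cache upstream answers locally as well
}

Each server block applies to its own zone, so in-cluster names are answered directly (and cached) while everything else is proxied upstream, which is exactly the local-caching behavior the question was after.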