I would like to configure custom DNS in CoreDNS (to work around a NAT loopback issue: inside the network, IPs do not resolve to the same addresses as they do outside the network).
I tried to modify the CoreDNS ConfigMap with a 'fake' domain just to test, but it does not work. I am using minik8s.
Here is the CoreDNS ConfigMap, followed by how the change is applied:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 8.8.8.8 8.8.4.4
        cache 30
        loop
        reload
        loadbalance
    }
    consul.local:53 {
        errors
        cache 30
        forward . 10.150.0.1
    }
kind: ConfigMap
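To pick up a ConfigMap change like this, CoreDNS can be edited and restarted roughly as follows (this assumes the default coredns Deployment and the k8s-app=kube-dns label in the kube-system namespace; adjust if yours differ):
$ kubectl -n kube-system edit configmap coredns
$ kubectl -n kube-system rollout restart deployment coredns
$ kubectl -n kube-system get pods -l k8s-app=kube-dns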
Then I try to resolve this address using busybox, but it does not work:
$ kubectl exec -ti busybox -- nslookup test.consul.local
nslookup: can't resolve 'test.consul.local'
command terminated with exit code 1
Even Kubernetes DNS is failing:
$ kubectl exec -ti busybox -- nslookup kubernetes.default
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
I've reproduced your scenario and it works as intended.
Here I'll describe two different ways to use custom DNS on Kubernetes. The first is at the Pod level: you can customize which DNS server your Pod uses. This is useful in specific cases where you don't want to change the configuration for all Pods.
To achieve this, you need to add some optional fields to the Pod spec. To learn more, see the Kubernetes documentation on Pod DNS config. Example:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-custom
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 8.8.8.8
    searches:
      - ns1.svc.cluster-domain.example
      - my.dns.search.suffix
    options:
      - name: ndots
        value: "2"
      - name: edns0
  restartPolicy: Always
$ kubectl exec -ti busybox-custom -- nslookup cnn.com
Server: 8.8.8.8
Address 1: 8.8.8.8 dns.google
Name: cnn.com
Address 1: 2a04:4e42::323
Address 2: 2a04:4e42:400::323
Address 3: 2a04:4e42:200::323
Address 4: 2a04:4e42:600::323
Address 5: 151.101.65.67
Address 6: 151.101.129.67
Address 7: 151.101.193.67
Address 8: 151.101.1.67
$ kubectl exec -ti busybox-custom -- nslookup kubernetes.default
Server: 8.8.8.8
Address 1: 8.8.8.8 dns.google
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
As you can see, this method creates problems resolving internal DNS names, because with dnsPolicy: "None" the Pod stops using the cluster DNS entirely.
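If you need an extra resolver in addition to cluster DNS (rather than instead of it), one option (just a sketch, not part of the setup above) is to keep the default dnsPolicy: ClusterFirst and only append entries via dnsConfig; the nameserver IP and search domain below are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-clusterfirst
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]
  dnsPolicy: ClusterFirst    # keep kube-dns/CoreDNS as the primary resolver
  dnsConfig:
    nameservers:
      - 10.150.0.1           # illustrative extra resolver, appended after the cluster DNS server
    searches:
      - consul.local         # illustrative extra search domain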
The second way to achieve this is to change DNS at the cluster level, which is the approach you chose. Here is my working configuration for comparison:
$ kubectl get cm coredns -n kube-system -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . 8.8.8.8 8.8.4.4
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
As you can see, I don't have a consul.local:53 entry.
Consul is a service networking solution to connect and secure services across any runtime platform and public or private cloud.
This kind of setup is not common, and I don't think you need this entry unless you are actually running Consul as a DNS backend. That block might well be your issue: when I add it, I face the same problems you reported.
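One way to confirm whether that extra server block is breaking CoreDNS itself is to check the CoreDNS pods and their logs (k8s-app=kube-dns is the default label on the CoreDNS pods in kube-system):
$ kubectl -n kube-system get pods -l k8s-app=kube-dns
$ kubectl -n kube-system logs -l k8s-app=kube-dns
A crash-looping coredns pod, or a plugin/parse error in the logs, would explain why even kubernetes.default stops resolving.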
$ kubectl exec -ti busybox -- nslookup cnn.com
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: cnn.com
Address 1: 2a04:4e42:200::323
Address 2: 2a04:4e42:400::323
Address 3: 2a04:4e42::323
Address 4: 2a04:4e42:600::323
Address 5: 151.101.65.67
Address 6: 151.101.193.67
Address 7: 151.101.1.67
Address 8: 151.101.129.67
$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
Another main problem is that you are debugging DNS using the latest busybox image. I highly recommend avoiding any version newer than 1.28, as newer images have known problems with name resolution.
The best busybox image you can use to troubleshoot DNS is 1.28, as Oleg Butuzov recommended in the comments.
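For completeness, a minimal debug Pod along those lines (the name is illustrative, and it uses the cluster's default DNS policy):
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28    # pinned to 1.28 to avoid nslookup issues in newer tags
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
Then test resolution with:
$ kubectl exec -ti busybox -- nslookup kubernetes.default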