Can we set up a k8s bare metal server to run a BIND DNS server (named) and have access to it from the outside on port 53?

4/12/2020

I have set up a k8s cluster on 2 bare metal servers (1 master and 1 worker) using kubespray with default settings (kube_proxy_mode: iptables and dns_mode: coredns), and I would like to run a BIND DNS server inside it to manage a couple of domain names.
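For reference, those two defaults are plain inventory variables in kubespray; an excerpt of what the relevant group vars look like (file path per the kubespray sample inventory, which may differ in your layout):

```yaml
# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# kubespray defaults used for this cluster
kube_proxy_mode: iptables
dns_mode: coredns
```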

I deployed a helloworld web app with Helm 3 for testing. Everything works like a charm (HTTP, HTTPS, Let's Encrypt through cert-manager).

kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:24:46Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8smaster   Ready    master   22d   v1.16.7
k8sslave    Ready    <none>   21d   v1.16.7

I deployed an image of my BIND DNS server (named) with a Helm 3 chart in the default namespace, with a service exposing port 53 of the bind app container.

I have tested the DNS resolution with a pod against the bind service, and it works well. Here is the test of the bind k8s service from the master node:

kubectl -n default get svc bind -o wide
NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE    SELECTOR
bind   ClusterIP   10.233.31.255   <none>        53/TCP,53/UDP   4m5s   app=bind,release=bind

kubectl get endpoints bind
NAME   ENDPOINTS                                                        AGE
bind   10.233.75.239:53,10.233.93.245:53,10.233.75.239:53 + 1 more...   4m12s

export SERVICE_IP=`kubectl get services bind -o go-template='{{.spec.clusterIP}}{{"\n"}}'`
nslookup www.example.com ${SERVICE_IP}
Server:     10.233.31.255
Address:    10.233.31.255#53

Name:   www.example.com
Address: 176.31.XXX.XXX

So the bind DNS app is deployed and is working fine through the bind k8s service.

For the next step, I followed the https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ documentation to set up the Nginx ingress controller (both the configmap and the service) to handle TCP/UDP requests on port 53 and redirect them to the bind DNS app.
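Per that guide, the setup looks roughly like the following (a sketch, assuming the bind service lives in the default namespace and that the controller is started with the --tcp-services-configmap / --udp-services-configmap flags pointing at these ConfigMaps):

```yaml
# ConfigMaps consumed by the nginx ingress controller:
# key = external port, value = namespace/service:port
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "53": default/bind:53
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "53": default/bind:53
```

The controller's Service then also needs matching 53/TCP and 53/UDP port entries; the bind-tcp / bind-udp port names that show up in the kube-proxy errors further down suggest that part was in place.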

When I test the name resolution from an external computer it does not work:

nslookup www.example.com <IP of the k8s master>
;; connection timed out; no servers could be reached

I dug into the k8s configuration, logs, etc. and found error messages in the kube-proxy logs:

ps auxw | grep kube-proxy
root     19984  0.0  0.2 141160 41848 ?        Ssl  Mar26  19:39 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8smaster

journalctl --since "2 days ago" | grep kube-proxy
<NOTHING RETURNED>

KUBEPROXY_FIRST_POD=`kubectl get pods -n kube-system -l k8s-app=kube-proxy -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | head -n 1`
kubectl logs -n kube-system ${KUBEPROXY_FIRST_POD}

I0326 22:26:03.491900       1 node.go:135] Successfully retrieved node IP: 91.121.XXX.XXX
I0326 22:26:03.491957       1 server_others.go:150] Using iptables Proxier.
I0326 22:26:03.492453       1 server.go:529] Version: v1.16.7
I0326 22:26:03.493179       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0326 22:26:03.493647       1 config.go:131] Starting endpoints config controller
I0326 22:26:03.493663       1 config.go:313] Starting service config controller
I0326 22:26:03.493669       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0326 22:26:03.493679       1 shared_informer.go:197] Waiting for caches to sync for service config
I0326 22:26:03.593986       1 shared_informer.go:204] Caches are synced for endpoints config 
I0326 22:26:03.593992       1 shared_informer.go:204] Caches are synced for service config 
E0411 17:02:48.113935       1 proxier.go:927] can't open "externalIP for ingress-nginx/ingress-nginx:bind-udp" (91.121.XXX.XXX:53/udp), skipping this externalIP: listen udp 91.121.XXX.XXX:53: bind: address already in use
E0411 17:02:48.119378       1 proxier.go:927] can't open "externalIP for ingress-nginx/ingress-nginx:bind-tcp" (91.121.XXX.XXX:53/tcp), skipping this externalIP: listen tcp 91.121.XXX.XXX:53: bind: address already in use

Then I looked for what was already using port 53...

netstat -lpnt | grep 53
tcp        0      0 0.0.0.0:5355            0.0.0.0:*               LISTEN      1682/systemd-resolv 
tcp        0      0 87.98.XXX.XXX:53        0.0.0.0:*               LISTEN      19984/kube-proxy    
tcp        0      0 169.254.25.10:53        0.0.0.0:*               LISTEN      14448/node-cache    
tcp6       0      0 :::9253                 :::*                    LISTEN      14448/node-cache    
tcp6       0      0 :::9353                 :::*                    LISTEN      14448/node-cache

A look at process 14448 (node-cache):

cat /proc/14448/cmdline | tr '\0' ' '
/node-cache -localip 169.254.25.10 -conf /etc/coredns/Corefile -upstreamsvc coredns

So coredns (via node-cache) is already handling port 53, which is normal because it's the k8s internal DNS service.

In the coredns documentation (https://github.com/coredns/coredns/blob/master/README.md) they mention a -dns.port option to use a distinct port... but when I look into kubespray (which has 3 Jinja templates, https://github.com/kubernetes-sigs/kubespray/tree/release-2.12/roles/kubernetes-apps/ansible/templates, for creating the coredns configmap, service, etc., similar to https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns), everything is hardcoded to port 53.
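Concretely, the port sits in the server-block header of the Corefile that those templates render; the stock Kubernetes Corefile looks roughly like this (simplified sketch, plugins may differ by version):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
}
```

Changing `.:53` to, say, `.:5353` would free the port inside the pod, but the kube-dns Service port and the cluster DNS address that kubelet hands to every pod's /etc/resolv.conf also assume 53, which is presumably why it's hardcoded everywhere.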

So my question is: is there a k8s cluster configuration/workaround that would let me run my own DNS server and expose it on port 53?

Maybe?

  • Set up coredns to use a different port than 53? Seems hard, and I'm really not sure it makes sense!
  • I could set up my bind k8s service to expose port 5353 and configure the nginx ingress controller to handle this 5353 port and redirect to the app's port 53. But this would require setting up iptables to route external DNS requests received on port 53 to my bind k8s service on port 5353. What would the iptables config be (INPUT / PREROUTING or FORWARD)? Would this kind of network configuration break coredns?
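On the iptables part of the second option: rewriting a destination port has to happen in the nat table's PREROUTING chain, before the routing decision; INPUT and FORWARD are filter-table chains and cannot change ports. A minimal sketch, assuming eth0 is the node's public interface and that the bind service is reachable on the node at port 5353 (both are assumptions):

```shell
# Redirect inbound DNS traffic arriving on the public interface
# from port 53 to 5353. Limiting the match to -i eth0 leaves
# node-local DNS (node-cache on 169.254.25.10:53) and in-cluster
# pod traffic untouched.
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -j REDIRECT --to-ports 5353
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 53 -j REDIRECT --to-ports 5353
```

Note that kube-proxy installs its own nat rules in PREROUTING, so rules like these need to be persisted across reboots and their ordering relative to kube-proxy's chains checked, which makes this approach somewhat fragile.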

Regards,

Chris

-- Chris
coredns
kubernetes
kubespray
named
port

1 Answer

4/12/2020

I suppose your nginx-ingress doesn't work as expected. On a bare metal k8s cluster you need a load balancer provider, such as MetalLB, to receive external connections on ports like 53. And you don't need nginx-ingress in front of bind: just change the bind Service type from ClusterIP to LoadBalancer and make sure the Service gets an external IP. Your Helm chart's documentation may help with switching to LoadBalancer.
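A sketch of what that could look like (the address pool and annotation value are placeholders; this uses the ConfigMap-based MetalLB configuration current at the time of writing):

```yaml
# MetalLB layer2 address pool; the address range is a placeholder
# (the node's public IP was elided in the question)
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 91.121.XXX.XXX/32
---
# bind Service switched from ClusterIP to LoadBalancer. Only 53/TCP is
# shown: on k8s v1.16 a LoadBalancer Service cannot mix TCP and UDP, so
# 53/UDP needs a second Service sharing the same IP via MetalLB's
# allow-shared-ip annotation.
apiVersion: v1
kind: Service
metadata:
  name: bind
  annotations:
    metallb.universe.tf/allow-shared-ip: bind-dns
spec:
  type: LoadBalancer
  selector:
    app: bind
    release: bind
  ports:
  - name: dns-tcp
    protocol: TCP
    port: 53
```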

-- Alex Vorona
Source: StackOverflow