I am creating a CoreDNS DNS server on Kubernetes that needs to listen for UDP traffic on port 53 using an AWS network load balancer. I would like that traffic to be proxied to a Kubernetes service using TCP.
My current service looks like this:
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: coredns
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  type: NodePort
  ports:
    - name: dns
      port: 5353
      targetPort: 5353
      nodePort: 30053
      protocol: UDP
    - name: dns-tcp
      port: 5053
      targetPort: 5053
      nodePort: 31053
      protocol: TCP
    - name: metrics
      port: 9153
      targetPort: 9153
      protocol: TCP
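For reference, here is a minimal sketch of a Corefile that matches these ports. This is illustrative rather than my exact config (the plugin lists are placeholders); the relevant part is one server block per targetPort, since a plain CoreDNS server block answers on both UDP and TCP for its port:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: coredns
data:
  Corefile: |
    # Serves the UDP-facing port 5353 (a plain block also answers TCP here).
    .:5353 {
        errors
        forward . /etc/resolv.conf
        prometheus :9153
        cache 30
    }
    # Serves the TCP-facing port 5053.
    .:5053 {
        errors
        forward . /etc/resolv.conf
        cache 30
    }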
The network load balancer reaches the cluster on the specified node ports, but the UDP listener times out when requesting zone data from the server: when I dig for records, the query times out unless +tcp is specified. The health checks from the load balancer to the node ports report healthy, and TCP queries return as expected.
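For example, with the load balancer's DNS name as a placeholder:

# Plain UDP query on port 53: times out
dig @my-nlb-1234567890.elb.us-east-1.amazonaws.com some-record.example.com

# Same query over TCP: returns the expected answer
dig @my-nlb-1234567890.elb.us-east-1.amazonaws.com some-record.example.com +tcp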
Ideally, my listener would accept both TCP and UDP traffic on port 53 at the load balancer and respond over whichever protocol the original request used.
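In AWS terms I picture that as a single TCP_UDP listener and target group, along these lines (a sketch only; the ARNs, VPC ID, and port are placeholders, and I have not confirmed this works with my current split node ports):

# One target group handling both protocols on a single node port
aws elbv2 create-target-group --name coredns-dns --protocol TCP_UDP \
    --port 30053 --vpc-id <vpc-id> --target-type instance

# One listener on :53 forwarding both TCP and UDP to that target group
aws elbv2 create-listener --load-balancer-arn <nlb-arn> \
    --protocol TCP_UDP --port 53 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>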
Is there anything glaringly obvious I am missing as to why UDP traffic either isn't making it to my cluster or isn't getting a response back?