Our kOps-based Kubernetes cluster in AWS stopped creating external DNS records in Route53, such as service-name-svc.testing.companydomain.com. Is there any way to check which flags the dns-controller running in the cluster was started with? Any other suggestions on how to troubleshoot this are welcome!
Meanwhile, internal records like service-name-svc.namespace.svc.cluster.local resolve fine:
```
Server:    100.32.0.10
Address 1: 100.32.0.10 kube-dns.kube-system.svc.cluster.local

Name:      service-name-svc.namespace.svc.cluster.local
Address 1: 100.32.12.141 service-name-svc.namespace.svc.cluster.local
```
The typical way to create Route53 records in a kOps cluster is to deploy external-dns to the control plane nodes.
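For example, with external-dns running, a Service is annotated with the hostname it should publish. The namespace, Service name, and hostname below are just the placeholders from your question:

```
# external-dns watches Services (typically type LoadBalancer) for this annotation
# and creates/updates the matching Route53 record
kubectl -n namespace annotate service service-name-svc \
  external-dns.alpha.kubernetes.io/hostname=service-name-svc.testing.companydomain.com
```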
dns-controller can create Route53 records too, and it does so for kube-apiserver and other system components. However, to have it manage records for other nodes or Services, you need to add specific annotations. See the dns-controller documentation.
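As for checking which flags dns-controller was started with, you can inspect its Deployment and logs directly, and verify that your Service carries the annotation dns-controller watches for. This is a rough sketch; the deployment name and namespace assume a default kOps install, and the Service/hostname are the placeholders from the question:

```
# Show how dns-controller was started (flags such as --dns=... and --zone=... appear here);
# deployment name/namespace assume the default kOps addon in kube-system
kubectl -n kube-system get deployment dns-controller -o yaml | grep -A 15 'command:'

# Check its logs for errors while updating Route53
kubectl -n kube-system logs deployment/dns-controller --tail=100

# dns-controller only manages records for objects carrying its annotation, e.g.
kubectl -n namespace annotate service service-name-svc \
  dns.alpha.kubernetes.io/external=service-name-svc.testing.companydomain.com
```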