I'm running Kubernetes on EKS, and everything worked fine until we needed more nodes and I scaled up the node group. Now I'm seeing an issue where any pod that is NOT scheduled on a node running a CoreDNS replica can't connect to anything, because DNS resolution fails.
My specific case: the pod is trying to connect to an AWS RDS Postgres database, and it works fine only when that pod "just happens" to land on a node where CoreDNS is running.
The /etc/resolv.conf files in all my pods are identical, but nslookup fails on roughly half of them:
kubectl exec -it dnsutils3 -n dev -- nslookup google.com
;; connection timed out; no servers could be reached
command terminated with exit code 1
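To show the correlation, here is roughly how I'm checking which nodes host CoreDNS versus where the failing pod landed (this assumes the standard EKS setup, where CoreDNS runs in kube-system with the `k8s-app=kube-dns` label):

```shell
# List the CoreDNS pods and the nodes they are scheduled on
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide

# Show which node the failing test pod is on, to compare with the list above
kubectl get pod dnsutils3 -n dev -o wide

# Verify the kube-dns Service actually has CoreDNS endpoints behind it
kubectl get endpoints kube-dns -n kube-system
```

Pods on nodes that appear in the first list resolve fine; pods on the newly added nodes time out as shown above.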