This setup is running on an Amazon EKS cluster.
I am getting an error where a service hostname, queried from inside a pod, does not resolve to the service's cluster IP.
$ curl -vvv myservice:10000
* Rebuilt URL to: myservice:10000/
* Hostname was NOT found in DNS cache
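A quick way to confirm this is purely a DNS problem rather than a problem with the service itself is to query the name directly from inside the pod; the commands below are a sketch and assume nslookup is available in the container image.
$ nslookup myservice                              # short name, relies on the search domains in resolv.conf
$ nslookup myservice.default.svc.cluster.local    # fully qualified service name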
The environment variables have the correct service name, IP, and port:
$ env | grep MYSERVICE
MYSERVICE_PORT_10000_TCP_PORT=10000
MYSERVICE_PORT=tcp://172.xx.xx.36:10000
MYSERVICE_PORT_10000_TCP=tcp://172.xx.xx.36:10000
MYSERVICE_PORT_10000_TCP_PROTO=tcp
MYSERVICE_SERVICE_PORT=10000
MYSERVICE_PORT_10000_TCP_ADDR=172.xx.xx.36
MYSERVICE_SERVICE_HOST=172.xx.xx.36
MYSERVICE_SERVICE_PORT_MYSERVICE=10000
I can curl the cluster IP and port directly and get the expected response.
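That is, something along these lines works, using the values from the env vars above:
$ curl -vvv "$MYSERVICE_SERVICE_HOST:$MYSERVICE_SERVICE_PORT"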
/etc/resolv.conf inside the pod looks like this:
$ cat /etc/resolv.conf
nameserver 172.20.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
options ndots:5
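Given that search path and ndots:5, a lookup for "myservice" should be expanded to "myservice.default.svc.cluster.local" and sent to the nameserver at 172.20.0.10. A sketch of how to test that nameserver directly from the pod (assuming dig and nc are available in the image):
$ dig @172.20.0.10 myservice.default.svc.cluster.local +short
$ dig @172.20.0.10 kubernetes.default.svc.cluster.local +short   # control query for a name that should always exist
$ nc -vz 172.20.0.10 53                                          # TCP reachability of the DNS service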
Is the container skipping a step that loads the hostname and service info?
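For reference, the cluster DNS service and its endpoints can be inspected from a machine with kubectl access; the label below assumes the standard CoreDNS/kube-dns deployment that EKS installs.
$ kubectl get svc kube-dns -n kube-system                        # ClusterIP should match the nameserver in resolv.conf
$ kubectl get endpoints kube-dns -n kube-system                  # should list the DNS pod IPs
$ kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide    # shows which nodes the DNS pods run on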
I created an ingress rule allowing all traffic within my worker-node security group and it started working. It looks like the problem only affected containers running on a node other than the one hosting the kube-dns pods, so cross-node DNS traffic was being blocked. There is probably a better solution, but for now this has resolved my issue.
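For anyone repeating this, the rule was roughly "allow all traffic where the source is the worker-node security group itself"; with the AWS CLI that would look something like the following (the group ID is a placeholder). Strictly speaking, cross-node DNS only needs UDP and TCP port 53 to reach the kube-dns pods, so a narrower rule would also work.
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol -1 \
    --source-group sg-0123456789abcdef0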
EDIT: The previous answer did not actually resolve my issue. The real problem was that two out of three nodes had the wrong cluster IP in /etc/systemd/system/kubelet.service. After fixing that, pods on every node were able to resolve DNS. The earlier change only appeared to work because the pod happened to be scheduled onto the one correctly configured node.
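In case it helps someone else, the value worth checking is the DNS cluster IP the kubelet hands to pods; on a systemd-managed kubelet that is the --cluster-dns flag, and it has to match the kube-dns service's ClusterIP (172.20.0.10 here). A rough check, assuming kubectl access and the standard kube-dns service name:
$ grep -- --cluster-dns /etc/systemd/system/kubelet.service
$ kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'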