I have created an EKS cluster by following the AWS getting started guide, with Kubernetes version 1.11. I have not changed any configuration for kube-dns. If I create a service, say myservice, I would like to access it from another EC2 instance that is not part of this EKS cluster but is in the same VPC. Basically, I want kube-dns to work as the DNS server for instances outside the cluster as well. How can I do that?
I have seen that the kube-dns service gets a cluster IP but no external IP. Is an external IP necessary for me to be able to access it from outside the cluster?
This is the current output:
[ec2-user@ip-10-0-0-149 ~]$ kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   172.20.0.10   <none>        53/UDP,53/TCP   4d
My VPC subnet is 10.0.0.0/16
I am trying to reach 172.20.0.10 from other instances in my VPC and I am not able to, which I think is expected because my VPC has no route to the 172.20.x.x range. But then how do I make this DNS service accessible to all the instances in my VPC?
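For example, a lookup like this against the kube-dns ClusterIP from an outside instance just hangs (assuming myservice is in the default namespace; the exact name is only for illustration):

dig @172.20.0.10 myservice.default.svc.cluster.local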
The problem you are facing is not really related to DNS. As you said, you cannot reach the ClusterIP from your other instances because it belongs to the internal cluster network and is unreachable from outside Kubernetes.
Instead of going in the wrong direction, I recommend using the NGINX Ingress Controller, which gives you an NGINX deployment backed by an AWS load balancer and lets you expose your services through it.
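As a rough sketch, an Ingress for a service like myservice could look something like this; the hostname and service port are only placeholders, and on Kubernetes 1.11 the Ingress API is still extensions/v1beta1:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice-ingress
  annotations:
    # tell the NGINX Ingress Controller to handle this Ingress
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: myservice.example.com      # placeholder hostname
      http:
        paths:
          - path: /
            backend:
              serviceName: myservice   # the Service you want to expose
              servicePort: 80          # placeholder port

Traffic then flows from the AWS load balancer in front of the Ingress Controller to NGINX and on to your service, so your other EC2 instances only need to resolve myservice.example.com instead of talking to kube-dns directly.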
You can further integrate your Ingresses with the ExternalDNS add-on, which will dynamically create DNS records in Route 53 for the hostnames used in your Ingresses.
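A minimal sketch of the ExternalDNS deployment, assuming a hosted zone example.com and an illustrative image tag (the real manifest also needs a service account and IAM permissions to modify the Route 53 zone):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.13.5  # illustrative tag, pick one that fits your cluster
          args:
            - --source=ingress              # watch Ingress resources for hostnames
            - --provider=aws                # create the records in Route 53
            - --domain-filter=example.com   # placeholder hosted zone
            - --policy=upsert-only          # only create/update records, never delete
            - --txt-owner-id=my-eks-cluster # placeholder ID for the TXT ownership records

With this in place, ExternalDNS watches your Ingresses and creates a Route 53 record such as myservice.example.com pointing at the load balancer in front of the Ingress Controller.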
This will take some time to configure, but it is the Kubernetes way of doing it.