Can't access EKS api server endpoint within VPC when private access is enabled

4/4/2019

I have set up an EKS cluster with "private access" enabled and launched one instance in the same VPC to communicate with EKS. The issue is that if I enable "public access", I can reach the API endpoint, but if I disable public access and enable private access only, I can't.

When private access is enabled:

kubectl get svc
Unable to connect to the server: dial tcp: lookup randomstring.region.eks.amazonaws.com on 127.0.0.53:53: no such host

When public access is enabled:

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   57m
-- Nitesh
amazon-eks
amazon-vpc
amazon-web-services
eks
kubernetes

1 Answer

4/4/2019

I had to enable enableDnsHostnames and enableDnsSupport for my VPC.

When you enable private access on a cluster, EKS creates a private hosted zone and associates it with the same VPC. It is managed by AWS itself and you can't view it in your AWS account. For this private hosted zone to work properly, your VPC must have enableDnsHostnames and enableDnsSupport set to true.
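A minimal sketch of setting both attributes with the AWS CLI (the VPC ID `vpc-0abc123` is a placeholder for your own):

```shell
#!/bin/sh
# Placeholder VPC ID -- replace with the VPC your EKS cluster uses.
VPC_ID="vpc-0abc123"

# Both attributes must be true for the EKS-managed private hosted zone
# to resolve the API server endpoint inside the VPC.
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-support '{"Value": true}'
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-hostnames '{"Value": true}'

# Verify (each attribute must be queried separately):
aws ec2 describe-vpc-attribute --vpc-id "$VPC_ID" --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id "$VPC_ID" --attribute enableDnsHostnames
```

Note that `modify-vpc-attribute` accepts only one attribute per call, which is why the command is run twice.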

Note: Wait a few minutes (about 5) for the changes to take effect.

-- Nitesh
Source: StackOverflow