Kubectl: Access kubernetes cluster using route53 private hosted zone

6/6/2018

I started my Kubernetes cluster on AWS EC2 with kops, using a private hosted zone in Route53. Now when I run something like kubectl get nodes, the CLI says it can't connect to api.kops.test.com because it is unable to resolve that name. I worked around the issue by manually adding a mapping of api.kops.test.com to its corresponding public IP (taken from the zone's record sets) in my /etc/hosts file.
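For reference, the workaround described above is a single hosts-file entry like the one below. The IP address here is a placeholder; the real value would come from the A record in the private hosted zone.

```
# /etc/hosts — hypothetical public IP for the kops API endpoint
203.0.113.10  api.kops.test.com
```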

I wanted to know if there is a cleaner way to do this (without modifying the system-wide /etc/hosts file), maybe programmatically or through the CLI itself.

-- Punit Naik
amazon-route53
amazon-web-services
kops
kubectl
kubernetes

1 Answer

6/7/2018

Pragmatically speaking, I would add the public IP as an IP SAN to the master's x509 cert, and then just use the public IP in your kubeconfig. Either that, or put the DNS record in a public Route53 zone rather than the private one.
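As a sketch of the kubeconfig side of that suggestion, the cluster entry would point at the public IP instead of the private DNS name. The cluster name, IP, and CA data below are placeholders, and this only avoids TLS errors once that IP is actually present as an IP SAN in the API server's certificate:

```
# ~/.kube/config — hypothetical cluster entry
clusters:
- cluster:
    server: https://203.0.113.10    # public IP instead of api.kops.test.com
    certificate-authority-data: <base64 CA bundle>
  name: kops.test.com
```

The same change can be made from the CLI with kubectl config set-cluster kops.test.com --server=https://203.0.113.10 (again, substituting your own cluster name and IP).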

You are in a situation where you purposefully made things private, so now they are.


Another option, depending on whether it would be worth the effort, is to run a VPN server in your VPC and connect your machine to it; the VPN connection can add the EC2 DNS servers to your machine's resolver configuration as a side effect of connecting. Our corporate Cisco AnyConnect client does something very similar to that.
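If the VPN server were OpenVPN, for example, pushing the VPC resolver to clients looks roughly like the fragment below. The resolver address (the ".2" address of the VPC CIDR) and the route are assumptions for a 10.0.0.0/16 VPC; adjust both to your network.

```
# OpenVPN server config — hypothetical VPC resolver and CIDR
push "dhcp-option DNS 10.0.0.2"
push "route 10.0.0.0 255.255.0.0"
```

With that in place, clients resolve api.kops.test.com through the VPC's DNS, which can see the private hosted zone.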

-- mdaniel
Source: StackOverflow