Cannot access EKS endpoint when private access is enabled within my VPC

7/19/2019

I've set up a VPC consisting of a public subnet and multiple private subnets. The public subnet hosts an OpenVPN Access Server through which I can access my instances running in the private subnets. The NAT and Internet Gateways are working fine, and I can access the internet from instances running in the private subnets over the VPN.

Everything was running fine until I decided to run an EKS cluster in one of my private subnets with the "Public Access" feature disabled. Now I cannot reach my EKS endpoint (the Kubernetes API server endpoint) over the VPN or from any instance running in my public/private subnets (i.e. using a jump box).

I googled a lot and found that I have to enable enableDnsHostnames and enableDnsSupport on my VPC, but enabling these did not help. I also checked my master node security group, which allows inbound traffic from anywhere (0.0.0.0/0) over port 443, so the security group is not the problem.
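For reference, this is roughly how the two DNS attributes can be verified from the AWS CLI (vpc-0abc123 is a placeholder for the VPC ID):

# Both attributes should report "Value": true for the private endpoint's
# DNS name to resolve inside the VPC. vpc-0abc123 is a placeholder.
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc123 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc123 --attribute enableDnsHostnames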

However, everything works just fine if I set the "Public Access" flag to Enabled, but that defeats the purpose of creating the Kubernetes cluster in a private subnet.

Can someone please point out where I'm going wrong? Thanks in advance.

-- Apollo
amazon-eks
amazon-vpc
aws-eks
kubernetes

1 Answer

11/21/2019

Intro

If you are setting up an EKS Kubernetes cluster on AWS, you probably want a cluster that is not accessible to the world and that you access privately via a VPN. Considering all the disclosed vulnerabilities, this is the more secure design: it isolates the Kubernetes control plane and worker nodes within your VPC, providing an additional layer of protection that hardens the cluster against malicious attacks and accidental exposure.

You do that by toggling off Public Access while creating the cluster. The problem is that automated DNS resolution for an EKS cluster with a private-only endpoint is still not supported out of the box, as it is, for example, with RDS private endpoints (the sketch after the list below shows how to inspect these settings on an existing cluster).

When creating your EKS cluster:

  • AWS does not allow you to change the DNS name of the endpoint.
  • AWS creates a managed private hosted zone for the endpoint DNS (not editable).
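As a quick check, and purely as a sketch (the cluster and profile names are placeholders), the current endpoint access configuration of a cluster can be inspected with the AWS CLI:

# Shows whether the API endpoint is reachable publicly and/or privately.
# "my-cluster" and "my-profile" are placeholders.
aws eks describe-cluster --name my-cluster --profile my-profile \
  --query "cluster.resourcesVpcConfig.{privateAccess:endpointPrivateAccess,publicAccess:endpointPublicAccess}"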

Solution N°1

One suggested solution is to create Route53 Resolver inbound and outbound endpoints, as described in this official AWS blog post.

However, the problem with that approach is that every time you create a cluster you will need to add its IPs to your local resolver, and if your local infrastructure is maintained by someone else, it might take days to get that done.
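For completeness, a minimal sketch of creating the inbound Resolver endpoint with the AWS CLI (the security group and subnet IDs are placeholders, and your local/VPN resolver still has to be configured to forward the EKS endpoint's domain to the resulting IPs):

# Creates a Route53 Resolver INBOUND endpoint so your local DNS can forward
# queries into the VPC. All IDs below are placeholders.
aws route53resolver create-resolver-endpoint \
  --name eks-dns-inbound \
  --direction INBOUND \
  --creator-request-id eks-dns-inbound-001 \
  --security-group-ids sg-0123456789abcdef0 \
  --ip-addresses SubnetId=subnet-0aaa1111 SubnetId=subnet-0bbb2222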

Solution N°2

You could solve that problem by writing a small script that updates /etc/hosts with the IP addresses and DNS name of the EKS private endpoint. This is kind of a hack, but it works well.

Here’s how the eks-dns.sh script looks:

#!/bin/bash

#
# eg: bash ~/.aws/eks-dns.sh bb-dev-eks-BedMAWiB bb-dev-devops
#
clusterName=$1
awsProfile=$2

#
# Get EKS ip addrs
#
ips=`aws ec2 describe-network-interfaces --profile $awsProfile \
--filters Name=description,Values="Amazon EKS $clusterName" \
| grep "PrivateIpAddress\"" | cut -d ":" -f 2 |  sed 's/[*",]//g' | sed 's/^\s*//'| uniq`

echo "#-----------------------------------------------------------------------#"
echo "# EKS Private IP Addresses:                                              "
echo $ips
echo "#-----------------------------------------------------------------------#"
echo ""

#
# Get EKS API endpoint
#
endpoint=`aws eks describe-cluster --profile $awsProfile --name $clusterName \
| grep endpoint\" | cut -d ":" -f 3 | sed 's/[\/,"]//g'`

echo "#-----------------------------------------------------------------------#"
echo "# EKS Private Endpoint                                                   "
echo $endpoint
echo "#-----------------------------------------------------------------------#"
echo ""

IFS=$'\n'
#
# Create backup of /etc/hosts
#
sudo cp /etc/hosts /etc/hosts.backup.$(date +%Y-%m-%d)

#
# Clean old EKS endpoint entries from /etc/hosts
#
if grep -q $endpoint /etc/hosts; then
  echo "Removing old EKS private endpoints from /etc/hosts"
  sudo sed -i "/$endpoint/d" /etc/hosts
fi

#
# Update /etc/hosts with EKS entry
#
for item in $ips
do
    echo "Adding EKS Private Endpoint IP Addresses"
    echo "$item $endpoint" | sudo tee -a /etc/hosts
done

Exec Example

╭─delivery at delivery-I7567 in ~ using ‹› 19-11-21 - 20:26:27
╰─○ bash ~/.aws/eks-dns.sh bb-dev-eks-BedMAWiB bb-dev-devops

Resulting /etc/hosts

╭─delivery at delivery-I7567 in ~ using ‹› 19-11-21 - 20:26:27
╰─○ cat /etc/hosts
127.0.0.1       localhost 

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

172.18.3.111 D4EB2912FDB14E8DAB358D471DD0DC5B.yl4.us-east-1.eks.amazonaws.com
172.18.1.207 D4EB2912FDB14E8DAB358D471DD0DC5B.yl4.us-east-1.eks.amazonaws.com

  • Ref Article: studytrails.com/devops/kubernetes/local-dns-resolution-for-eks-with-private-endpoint/
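As a quick sanity check after running the script, you can confirm that the endpoint now resolves to the private IPs and that the API server answers. This is just a sketch, using the same example cluster, profile and endpoint shown above:

# getent consults /etc/hosts, so it should print the private IPs added above.
getent hosts D4EB2912FDB14E8DAB358D471DD0DC5B.yl4.us-east-1.eks.amazonaws.com

# Point kubectl at the cluster and verify connectivity to the API server.
aws eks update-kubeconfig --name bb-dev-eks-BedMAWiB --profile bb-dev-devops
kubectl cluster-info
kubectl get nodes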

Important Consideration

Also, as stated in the related question "Can't access EKS api server endpoint within VPC when private access is enabled", your VPC must have enableDnsHostnames and enableDnsSupport set to true. I had to enable both for my VPC.

When you enable private access for a cluster, EKS creates a private hosted zone and associates it with the same VPC. It is managed by AWS itself and you can't view it in your AWS account. For this private hosted zone to work properly, your VPC must have enableDnsHostnames and enableDnsSupport set to true.
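A minimal sketch for enabling both attributes from the AWS CLI (the VPC ID is a placeholder; each attribute must be modified in a separate call):

# vpc-0abc123 is a placeholder; replace it with your VPC ID.
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc123 --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc123 --enable-dns-hostnames "{\"Value\":true}"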

Note: Wait a while for the changes to be reflected (about 5 minutes).

-- Exequiel Barrirero
Source: StackOverflow