Good way to whitelist ingress traffic for JHub on EKS (AWS kubernetes)?

10/2/2019

Context: I have an EKS cluster (EKS is AWS' managed Kubernetes service). I deploy an application (JupyterHub) to this EKS cluster via helm. I have a VPN server. Users of my application (JupyterHub on EKS) must connect to the VPN server before they can access the application. I enforce this by removing the 0.0.0.0/0 "allow all" ingress rule on the elastic load balancer and adding an ingress rule that allows traffic from the VPN server only. The elastic load balancer referenced above is created implicitly by the JupyterHub application that gets deployed to EKS via helm.

Problem: When I deploy changes to the running JupyterHub application in EKS, sometimes [depending on the changes] the ELB gets deleted and re-created. This causes the security group associated with the ELB to also get re-created, along with its ingress rules. This is not ideal because it is easy to overlook when deploying changes to JupyterHub/EKS, and a developer might forget to verify that the security group rules are still present.

Question: Is there a more robust place I can enforce this ingress network rule (only allow traffic from VPN server) ?

Two thoughts I had, but are not ideal:

  • Use a NACL. This won't really work, because NACLs are stateless and operate at the subnet level, so managing the CIDRs (and the ephemeral ports needed for return traffic) adds a lot of overhead.
  • I thought to add my ingress rules to the security group associated with the EKS worker nodes instead, but this won't work due to the same problem. When you deploy an update to JupyterHub/EKS, and the ELB gets replaced, an "allow all traffic" ingress rule is implicitly added to the EKS worker node security group (allowing all traffic from the ELB). This would override my ingress rule.
-- James Wierzba
amazon-web-services
aws-eks
jupyterhub
kubernetes
networking

1 Answer

10/2/2019

It sounds like you're using a LoadBalancer service for JupyterHub. A better way of handling ingress into your cluster would be to use a single ingress controller (like the nginx ingress controller), deployed via a separate helm chart.

Then, deploy JupyterHub's helm chart but pass a custom value into the release with the --set parameter to tell it to use a ClusterIP service instead of the LoadBalancer type. This way, changes to your JupyterHub release that might re-create the ClusterIP service won't matter, because you'll be using Ingress rules for the Ingress Controller to manage ingress for JupyterHub instead.
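If you're using the zero-to-jupyterhub chart, the override might look something like this (the exact value path can vary between chart versions, so treat this as a sketch and check your chart's values reference):

```yaml
# values.yaml override for the JupyterHub helm chart (sketch).
# proxy.service.type controls the type of the proxy-public service,
# which defaults to LoadBalancer in the zero-to-jupyterhub chart.
proxy:
  service:
    type: ClusterIP
```

Applied with something like `helm upgrade <release> jupyterhub/jupyterhub --values values.yaml`, or equivalently `--set proxy.service.type=ClusterIP` on the command line.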

Use the ingress rule feature of the JupyterHub helm chart to configure ingress rules for your nginx ingress controller: see docs here
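As a sketch, enabling that in the JupyterHub chart's values might look like the following (the hostname is a placeholder, and the ingress class annotation assumes you deployed the nginx controller — verify the exact schema against the chart docs linked above):

```yaml
# JupyterHub helm chart values (sketch): route traffic through the
# nginx ingress controller instead of a dedicated LoadBalancer.
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - jhub.example.internal   # placeholder hostname
```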

The LoadBalancer generated by the nginx ingress controller will remain persistent/stable across JupyterHub releases, and you can define your security group ingress rules on it separately.

Effectively you're decoupling ingress into EKS apps from your JupyterHub app by using the Ingress Controller + ingress rules pattern of access.

On the subject of ingress and LoadBalancers

With EKS/Helm and load balanced services the default is to create an internet facing elastic load balancer.

There are some extra annotations you can add to the service definition that will instead create it as an internal facing LoadBalancer.

This might be preferable to you for your ingress controller (or anywhere else you want to use LoadBalancer services), as it doesn't immediately expose the app to the open internet. You mentioned you already have VPN access into your VPC network, so users can still VPN in, and then hit the LoadBalancer hostname.

I wrote up a guide a while back on installing the nginx ingress controller here. It talks about doing this with DigitalOcean Kubernetes, but it is still relevant for EKS as it's just a helm chart.

There is another post I did which talks about some extra configuration annotations you can add to your ingress controller service that automatically create the specific port-range ingress security group rules at the same time as the load balancer. (This is another option if you find that each time the load balancer gets created you have to manually update the ingress rules on the security group.) See the post on customising the Ingress Controller load balancer and port ranges for ingress here

The config values you want for auto-configuring your LoadBalancer ingress source ranges and setting it to internal can be set with:

  1. controller.service.loadBalancerSourceRanges
  2. service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
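Put together in the nginx ingress controller chart's values, that might look like this (the CIDR is a placeholder for your VPN server's address range — substitute your own):

```yaml
# nginx ingress controller helm chart values (sketch).
controller:
  service:
    # Restrict load balancer ingress to the VPN's address range
    # (placeholder CIDR -- use your VPN server's actual range).
    loadBalancerSourceRanges:
      - 10.8.0.0/24
    annotations:
      # Provision an internal-facing ELB rather than an
      # internet-facing one.
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
```

With this, the source-range restriction is declared in the chart values, so it is re-applied automatically if the load balancer is ever re-created.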

Hope that helps!

-- Shogan
Source: StackOverflow