I have a UDP service I need to expose to the internet from an AWS EKS cluster. AWS load balancers (Classic or NLB) don't support UDP, so I'd like to use a NodePort service with Route 53 multivalue answer records to get UDP round-robin load balancing across my nodes.
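For reference, the Route 53 side would be one multivalue answer record set per node public IP, roughly like this (the zone ID, record name, and address are placeholders):

# One record set per node; repeat with a unique SetIdentifier
# for each node's public IP.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "udp.example.com",
        "Type": "A",
        "SetIdentifier": "node-1",
        "MultiValueAnswer": true,
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'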
My nodes on AWS EKS don’t have an ExternalIP
assigned to them. While the EC2 instances the nodes run on have public IPs, these weren't assigned to the nodes when the cluster was created.
How can I assign the EC2 public IPs to my k8s nodes?
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
x.us-west-2.compute.internal Ready <none> 7d v1.10.3 <none> Amazon Linux 2 (2017.12) LTS Release Candidate 4.14.42-61.37.amzn2.x86_64 docker://17.6.2
x.us-west-2.compute.internal Ready <none> 7d v1.10.3 <none> Amazon Linux 2 (2017.12) LTS Release Candidate 4.14.42-61.37.amzn2.x86_64 docker://17.6.2
x.us-west-2.compute.internal Ready <none> 7d v1.10.3 <none> Amazon Linux 2 (2017.12) LTS Release Candidate 4.14.42-61.37.amzn2.x86_64 docker://17.6.2
I'm currently testing against an HTTP service for convenience, and here's what my test service looks like:
apiVersion: v1
kind: Service
metadata:
  name: backend-api
  labels:
    app: backend-api
spec:
  selector:
    app: backend-api
  type: NodePort
  ports:
  - name: back-http
    port: 81
    targetPort: 8000
    protocol: TCP
  externalIPs:
  - x.x.x.x
  - x.x.x.x
  - x.x.x.x
For this example, my curl requests never hit the HTTP service running on the nodes. My hunch is that this is because the nodes don't have externalIPs.
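For reference, the test request has this shape (the address is a placeholder for one of the node public IPs; with externalIPs set, kube-proxy is expected to forward port 81 on those addresses to the pods):

curl -v http://203.0.113.10:81/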
You can use the External IP controller to assign IPs to the nodes. It is designed to work on bare-metal clusters, but I think it should work in your case as well.
External IP Controller is a k8s application which is deployed on top of a k8s cluster and which configures External IPs on k8s worker node(s) to provide IP connectivity.
Description:
- The External IP controller runs as a Kubernetes application on one of the nodes (replicas=1).
- On start, it pulls information about services from kube-api and brings up all External IPs on the specified interface (eth0, for example).
- It watches kube-api for updates to services with External IPs and:
- When new External IPs appear, it brings them up.
- When a service is removed, it removes the corresponding External IPs from the interface.
- Kubernetes provides failover for the External IP controller. Since replicas is set to 1, only one instance runs in the cluster, which avoids duplicated IPs. If the k8s node it runs on has a problem, the External IP controller is spawned on a new worker node and brings the External IPs up there.
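To give an idea of the deployment's shape, here is a minimal sketch; the image name, flag, and capability are illustrative assumptions rather than the project's actual manifest, so use the manifests shipped with the project's repository:

# Illustrative sketch only: image, args, and capabilities are assumptions;
# use the manifests from the External IP controller project itself.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: externalipcontroller
spec:
  replicas: 1                    # a single replica avoids duplicate IP assignment
  selector:
    matchLabels:
      app: externalipcontroller
  template:
    metadata:
      labels:
        app: externalipcontroller
    spec:
      hostNetwork: true          # must reach the node's network interfaces
      containers:
      - name: externalipcontroller
        image: mirantis/k8s-externalipcontroller   # assumed image name
        args:
        - --iface=eth0           # interface on which External IPs are brought up (assumed flag)
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]   # needed to add addresses to the interface (assumption)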
Check out the Demo to see how it works.