I have a limited number of IPs in my public-facing VPC, which means I cannot run the Kubernetes worker nodes there: there would not be enough IPs to support all the pods. My requirement is to run the control plane in the public-facing VPC and the worker nodes in a different VPC with a private IP range (192.168.x.x).
We use Traefik for ingress and have deployed it as a DaemonSet. The pods are exposed through a Kubernetes Service of type LoadBalancer backed by an NLB, and we created a VPC endpoint on top of that NLB, which lets us reach the Traefik endpoint from our public-facing VPC.
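For reference, this is roughly what such a Service looks like (a minimal sketch; the name and ports are hypothetical, but the `service.beta.kubernetes.io/aws-load-balancer-type` annotation is what makes the in-tree AWS provider provision an NLB instead of a classic ELB):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik            # hypothetical name
  annotations:
    # Requests an NLB rather than the default classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: traefik           # must match the DaemonSet's pod labels
  ports:
    - name: http
      port: 80
      targetPort: 80
```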
However, according to the docs, NLB support is still in alpha. I am curious what other options I have given the above constraints.
Usually, in a Kubernetes cluster, Pods run in a separate overlay subnet that must not overlap with the existing IP subnets in your VPC.
This functionality is provided by Kubernetes cluster networking solutions such as Calico, Flannel, or Weave.
So you only need enough IP address space to support the cluster nodes themselves.
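For example, with kubeadm the pod overlay CIDR is set when the cluster is initialized, independently of the VPC's address ranges (a sketch; the CIDR shown is Flannel's conventional default, adjust it for your chosen network plugin):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  # Overlay range used for pod IPs; does not consume VPC addresses
  podSubnet: "10.244.0.0/16"
  # Range used for ClusterIP Services, also virtual
  serviceSubnet: "10.96.0.0/12"
```

Only the node IPs (and any load balancers) come out of the VPC's subnets; pod and service IPs live entirely in these virtual ranges.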
The main benefit of using an NLB is that it preserves the client IP address as seen by the pods, so if you have no such requirement, a regular ELB would be fine for most cases.
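In that case the Service can stay a plain LoadBalancer with no annotation, which the AWS cloud provider turns into a classic ELB by default (sketch with hypothetical names; `externalTrafficPolicy: Local` is only needed if you later do want source IPs preserved at the node level):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik            # hypothetical name
spec:
  type: LoadBalancer       # no annotation: defaults to a classic ELB on AWS
  selector:
    app: traefik
  ports:
    - name: http
      port: 80
      targetPort: 80
```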