What I have:
I have created a Kubernetes cluster using a single-node Rancher 2.0 deployment, with 3 etcd/control-plane nodes and 2 worker nodes attached to the cluster.
What I did:
I deployed an API gateway to this cluster and one Express mydemoapi service (no DB) with 5 pods across 2 nodes on port 5000, which I don't want to expose publicly. So I mapped that service endpoint by its service name in the API gateway as http://mydemoapi:5000, and it was reachable through the gateway's public endpoint.
Problem statement:
The mydemoapi service is served in a random fashion, not round robin, because the default kube-proxy mode is random, as described in the Rancher documentation on load balancing in Kubernetes.
Partial success:
I created an ingress load balancer with the "Keep the existing hostname" option in the Rancher rules, using the URL mydemoapi.<namespace>.153.xx.xx.102.xip.io, and attached the service to that ingress. It is now served in round-robin fashion, but with one problem: the ingress uses xip.io with the public IP of my worker node, so the service is exposed publicly.
Help needed:
I want to map my internal ClusterIP service to the gateway with internal-only access, so that it is served to the gateway in round-robin fashion and then reached through the gateway's public endpoint. I don't want to expose the service publicly except through the gateway.
Not sure which cloud you are running on, but if you are on something like AWS you can set the following annotation to "true" on your Service definition:

service.beta.kubernetes.io/aws-load-balancer-internal: "true"
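For example, a Service manifest carrying that annotation might look like this (a minimal sketch; the app: mydemoapi selector label and port numbers are assumptions based on your description, not something from your cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mydemoapi
  annotations:
    # Tells the AWS cloud provider to create an internal (non-internet-facing) ELB
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: mydemoapi   # assumed pod label; match your Deployment's labels
  ports:
    - port: 5000
      targetPort: 5000
```

The resulting load balancer gets a private address, so it is reachable from inside your VPC (e.g. by the gateway) but not from the public internet.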
Other cloud providers have similar solutions, and some don't have one at all. In that case, you will have to use a NodePort service and point an external load balancer, such as one running haproxy or nginx, at that NodePort.
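A NodePort variant of the same Service could look like this (again a sketch; the selector label and the chosen nodePort are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mydemoapi
spec:
  type: NodePort
  selector:
    app: mydemoapi   # assumed pod label
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 30500   # must fall in the default 30000-32767 NodePort range
```

Your external haproxy/nginx instance would then forward traffic to <node-ip>:30500 on each worker node; if that load balancer sits on a private network, the service stays internal.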
Another option, if you want round robin between your backends without using an Ingress at all, is to change your kube-proxy configuration to use either the old userspace proxy mode or the more capable ipvs proxy mode.
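As a sketch, the ipvs mode with a round-robin scheduler can be set through the KubeProxyConfiguration (how you deliver this config depends on how Rancher/RKE provisions kube-proxy, so treat the placement as an assumption; the field names below are from the kubeproxy.config.k8s.io/v1alpha1 API):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"     # requires the ip_vs kernel modules on each node
ipvs:
  scheduler: "rr"   # "rr" = round robin; other IPVS schedulers (e.g. "lc") also exist
```

With this in place, the ClusterIP for mydemoapi itself balances pod-to-pod traffic round robin, so the gateway can keep calling http://mydemoapi:5000 internally with no ingress involved.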