Is there a way to not use GKE's standard load balancer?

4/8/2018

I'm trying to use Kubernetes to make configurations and deployments explicitly defined, and I also like Kubernetes' pod scheduling mechanisms. There are (for now) just 2 apps running with 2 replicas each on 3 nodes. But Google Kubernetes Engine's load balancer is extremely expensive for a small app like ours (at least for the moment), and at the same time I'm not willing to switch to a single-instance hosting solution on a container, or to deploying the app on Docker Swarm, etc.

Using the nodes' IPs seemed like a hack, and I thought it might expose some security issues inside the cluster. So I configured a Træfik ingress and an ingress controller to get around Google's expensive flat rate for load balancing, but it turns out an outward-facing ingress spins up a standard load balancer, or I'm missing something.

I hope I'm missing something, since at this rate ($16 a month) I can't rationalize using Kubernetes for this app from the start.

Is there a way to use GKE without using Google's load balancer?

-- interlude
google-compute-engine
kubernetes
traefik

4 Answers

4/8/2018

You could deploy the nginx ingress controller using NodePort mode (e.g. if using the Helm chart, set controller.service.type to NodePort) and then load-balance among your instances using DNS. Just make sure you have static IPs for the nodes, or you could even create a DaemonSet that somehow updates your DNS with each node's IP.

Traefik seems to support a similar configuration (e.g. through serviceType in its helm chart).
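As a rough sketch, the NodePort setup for the nginx ingress controller chart could look like this (chart values vary by version, and the fixed nodePort numbers here are just illustrative):

```yaml
# values.yaml for the stable/nginx-ingress Helm chart (sketch; check
# your chart version's values for the exact keys)
controller:
  service:
    type: NodePort       # expose the controller on every node
    nodePorts:
      http: 30080        # illustrative fixed port in the NodePort range
      https: 30443
```

You would then install with something like `helm install stable/nginx-ingress -f values.yaml` and point your DNS records at the node IPs.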

-- eug
Source: StackOverflow

1/21/2019

An Ingress is just a set of rules that tell the cluster how to route to your services, and a Service is another set of rules to reach and load-balance across a set of pods, based on the selector. A service can use 3 different routing types:

  • ClusterIP - this gives the service an IP that's only available inside the cluster which routes to the pods.
  • NodePort - this creates a ClusterIP, and then creates an externally reachable port on every single node in the cluster. Traffic to those ports routes to the internal service IP and then to the pods.
  • LoadBalancer - this creates a ClusterIP, then a NodePort, and then provisions a load balancer from a provider (if available like on GKE). Traffic hits the load balancer, then a port on one of the nodes, then the internal IP, then finally a pod.
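To make the layering concrete, here is a sketch of a NodePort Service (the name, selector, and port numbers are illustrative): the clusterIP/port pair is the internal layer, and nodePort is the externally reachable layer built on top of it.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: yourapp              # hypothetical name
spec:
  type: NodePort
  selector:
    app: yourapp             # must match your pods' labels
  ports:
    - port: 80               # ClusterIP port, reachable inside the cluster
      targetPort: 8080       # the containerPort on the pods
      nodePort: 30080        # opened on every node (30000-32767 range)
```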

These different types of services are not mutually exclusive but actually build on each other, which explains why anything public must use a NodePort. Think about it - how else would traffic reach your cluster? A cloud load balancer just directs requests to your nodes, pointed at one of the NodePort ports. If you don't want a GKE load balancer then you can skip it and access those ports directly.

The downside is that NodePorts are limited to the range 30000-32767. If you need the standard HTTP ports 80/443 then you can't accomplish this with a Service, and instead must specify the port directly in your Deployment. Use the hostPort setting to bind the container directly to port 80 on the node:

containers:
  - name: yourapp
    image: yourimage
    ports:
      - name: http
        containerPort: 80
        hostPort: 80 ### this will bind to port 80 on the actual node

This might work for you and routes traffic directly to the container without any load-balancing, but if a node has problems or the app stops running on a node then it will be unavailable.

If you still want load-balancing then you can run a DaemonSet (so that it's available on every node) with Nginx (or any other proxy) exposed via hostPort and then that will route to your internal services. An easy way to run this is with the standard nginx-ingress package, but skip creating the LoadBalancer service for it and use the hostPort setting. The Helm chart can be configured for this:

https://github.com/helm/charts/tree/master/stable/nginx-ingress
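A sketch of such a configuration for that chart (key names differ between chart versions, so treat this as illustrative rather than exact):

```yaml
# values.yaml sketch for stable/nginx-ingress: run the controller as a
# DaemonSet bound directly to the node's ports, and skip the
# LoadBalancer Service entirely
controller:
  kind: DaemonSet
  daemonset:
    useHostPort: true      # bind the controller pods to hostPorts
    hostPorts:
      http: 80
      https: 443
  service:
    type: ClusterIP        # no cloud LoadBalancer is provisioned
```

With this, every node answers on ports 80/443 and routes through nginx to your internal Services, so DNS round-robin across the nodes gives you basic load-balancing without the GKE load balancer.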

-- Mani Gandham
Source: StackOverflow

4/8/2018

One option is to completely disable this feature on your GKE cluster. When creating the cluster (on console.cloud.google.com), under Add-ons, disable HTTP load balancing. If you are using gcloud you can use gcloud beta container clusters create ... --disable-addons=HttpLoadBalancing.

Alternatively, you can also inhibit the GCP Load Balancer by adding an annotation to your Ingress resources, kubernetes.io/ingress.class=somerandomstring.

For newly created ingresses, you can put this in the yaml document:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: somerandomstring
...

If you want to do that for all of your Ingresses you can use this example snippet (be careful!):

kubectl get ingress --all-namespaces \
  -o jsonpath='{range .items[*]}{"kubectl annotate ingress -n "}{.metadata.namespace}{" "}{.metadata.name}{" kubernetes.io/ingress.class=somerandomstring\n"}{end}' \
  | sh -x

Ingresses are still very useful with Kubernetes, so I suggest you check out the nginx ingress controller and, after deploying it, annotate your Ingresses accordingly.

-- Janos Lenart
Source: StackOverflow

4/8/2018

If you specify the Ingress class as an annotation on the Ingress object

kubernetes.io/ingress.class: traefik

Traefik will pick it up while the Google Load Balancer will ignore it. There is also a bit of Traefik documentation on this part.
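Put together, a complete Ingress of that era might look like this (the name, host, and backend Service are hypothetical placeholders):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: yourapp                        # hypothetical
  annotations:
    kubernetes.io/ingress.class: traefik   # claimed by Traefik, ignored by GKE
spec:
  rules:
    - host: app.example.com            # hypothetical host
      http:
        paths:
          - backend:
              serviceName: yourapp     # your existing Service
              servicePort: 80
```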

-- Timo Reimann
Source: StackOverflow