It's really annoying using Load Balancers because they cost money per hour. I don't have money per hour yet and am trying to avoid as much overhead as possible. On top of that, you can't have a TCP Load Balancer with a timeout greater than 30 seconds...
https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
Internal load balancing makes your cluster's services accessible to applications running on the same network but outside of the cluster. For example, if you run a cluster alongside some Compute Engine VM instances in the same network and you would like your cluster-internal services to be available to the cluster-external instances, you need to configure one of your cluster's Service resources to add an internal load balancer.
This is exactly what I want to do, but with IPtables and a VM instead of the Internal Load balancer.
Without internal load balancing, you would need to set up external load balancers, create firewall rules to limit the access, and set up network routes to make the IP address of the application accessible outside of the cluster.
I can do everything except the last part, where it talks about network routes. Before reading this, I tried creating a NodePort service and got an internal IP address. I could not even reach it. I made sure I was not using a ClusterIP, per the Kubernetes documentation on the different types of Services.
Also note I have no intention of scaling my resources at this time so I will only have 1 node with 3 containers within it.
How do I route traffic to a NodePort service using iptables? What IP address do I use if the one provided by the NodePort service doesn't work?
A NodePort service in Kubernetes will expose a port on each node within your cluster. The exposed port will then send traffic to your service, and the service will then send traffic to one of your pods. The below is taken from the Kubernetes website.
If you set the 'type' field to 'NodePort', the Kubernetes master will allocate a port from a range specified by --service-node-port-range flag (default: 30000-32767), and each Node will proxy that port (the same port number on every Node) into your 'Service'.
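As a concrete illustration, a minimal NodePort Service manifest might look like the sketch below. The names, labels, and port numbers are all placeholders for your own setup, not something specific to your cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
spec:
  type: NodePort
  selector:
    app: my-app           # must match the labels on your pods
  ports:
    - port: 80            # port the Service exposes inside the cluster
      targetPort: 8080    # port your container actually listens on
      nodePort: 30080     # optional; if set, must fall in 30000-32767
```

After applying this, traffic to `<node-internal-IP>:30080` reaches the service. You can find the node's internal IP with `kubectl get nodes -o wide`. Note that the node IP itself is not pingable from outside the network unless firewall rules allow ICMP, which is why a failed ping doesn't necessarily mean the NodePort is broken.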
There is no need for iptables in this case. You will, however, need to make sure that you have firewall rules that allow traffic to your Kubernetes nodes on the port exposed by the service.
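On GCP, a firewall rule for this could be created with something like the following. The rule name, source range, and target tag are assumptions you would replace with your own (GKE nodes carry an auto-generated network tag you can find by describing one of the node instances):

```shell
# Hypothetical example: allow TCP traffic to the single NodePort
# (or the whole 30000-32767 range) on the cluster's nodes.
gcloud compute firewall-rules create allow-my-nodeport \
  --network=default \
  --allow=tcp:30080 \
  --source-ranges=10.0.0.0/8 \
  --target-tags=<your-gke-node-tag>
```

Restricting `--source-ranges` to your internal subnet keeps the port from being reachable from the public internet, which matches the goal of avoiding an external load balancer.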
The downside to this config is that if your node IPs change (due to autoscaling, nodes crashing, or live migrations), you will need to update the target IP. Whether you use iptables or not, this setup will not automatically pick up the new IP of your node(s) if it changes.
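If you do want the intermediate-VM approach from the question anyway, the forwarding on that VM can be sketched with iptables DNAT. This is a minimal sketch under the assumption of a single node; `NODE_IP` and the port numbers are placeholders for your environment:

```shell
# On the proxy VM: enable forwarding of packets that aren't addressed
# to this machine.
sysctl -w net.ipv4.ip_forward=1

NODE_IP=10.128.0.5   # placeholder: internal IP of your cluster node

# Rewrite the destination of incoming TCP traffic on port 80 to the
# node's NodePort.
iptables -t nat -A PREROUTING -p tcp --dport 80 \
  -j DNAT --to-destination ${NODE_IP}:30080

# Masquerade the forwarded traffic so replies come back through this VM
# instead of going directly to the original client.
iptables -t nat -A POSTROUTING -p tcp -d ${NODE_IP} --dport 30080 \
  -j MASQUERADE
```

This is exactly the setup the caveat above applies to: if the node is replaced and gets a new internal IP, the `NODE_IP` in these rules must be updated by hand.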