GKE 1 load balancer with multiple apps on different assigned ports

4/14/2020

I want to be able to deploy several single-pod apps and access them all on a single IP address, leaning on Kubernetes to assign the ports as it does when you use a NodePort service.

Is there a way to use NodePort with a load balancer?

Honestly, NodePort might work by itself, but GKE seems to block direct access to the nodes. There don't seem to be firewall controls like there are on their unmanaged VMs.

Here's a service if we need something to base an answer on. In this case, I want to deploy 10 of these services, each a different application, on the same IP, each publicly accessible on a different port, and each proxying port 80 of its nginx container.

---
apiVersion: v1
kind: Service
metadata:
  name: foo-svc
spec:
  selector:
    app: nginx
  ports:
    - name: foo
      protocol: TCP
      port: 80
  type: NodePort
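
If it helps: after applying a manifest like the one above, Kubernetes picks a port from the default 30000-32767 NodePort range automatically, and the assigned port can be read back with kubectl (foo-svc is the service name from my manifest):

```shell
# Show the auto-assigned nodePort for the foo-svc service
kubectl get svc foo-svc -o jsonpath='{.spec.ports[0].nodePort}'
```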
-- Coder1
google-kubernetes-engine
kubernetes

2 Answers

5/4/2020

GKE seems to block direct access to the nodes.

GCP allows creating firewall rules that allow incoming traffic either to 'All instances in the network' or to 'Specified target tags/service accounts' in your VPC network.

Rules are persistent unless the opposite is specified under the organization's policies.
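
As a sketch (the rule name and target tag are placeholders for your setup, and 30000-32767 is the default NodePort range), such a rule can be created with gcloud:

```shell
# Allow external TCP traffic to the default NodePort range
# on nodes carrying the tag "gke-node" (replace with your
# cluster's actual node tag)
gcloud compute firewall-rules create allow-nodeports \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:30000-32767 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=gke-node
```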

A node's external IP address can be checked at Cloud Console --> Compute Engine --> VM Instances, or with kubectl get nodes -o wide.

I run GKE (managed k8s) and can access all my assets externally, having opened all the needed ports. Below is the quickest example from my setup:

$ kubectl get nodes -o wide 
NAME        AGE   VERSION           INTERNAL-IP   EXTERNAL-IP
gke--mnnv   43d   v1.14.10-gke.27   10.156.0.11   34.89.x.x   
gke--nw9v   43d   v1.14.10-gke.27   10.156.0.12   35.246.x.x

$ kubectl get svc -o wide
NAME     TYPE        CLUSTER-IP    EXTERNAL-IP  PORT(S)                         SELECTOR
knp-np   NodePort    10.0.11.113   <none>       8180:30008/TCP 8180:30009/TCP   app=server-go

$ curl 35.246.x.x:30008/test
Hello from ServerGo. You requested: /test 

That is why it looks like a bunch of NodePort type Services would be sufficient (each one serving requests for a particular selector).
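
For example, each app could get its own NodePort Service; a sketch matching my knp-np service above (the targetPort and the explicit nodePort are illustrative — omit nodePort to let Kubernetes assign one from the default range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: knp-np
spec:
  type: NodePort
  selector:
    app: server-go
  ports:
    - name: http
      protocol: TCP
      port: 8180        # service port inside the cluster
      targetPort: 8080  # container port (assumed for this example)
      nodePort: 30008   # externally reachable on every node
```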

If for some reason it's not possible to set up the FW rules to allow traffic directly to your Nodes, it's possible to configure a GCP TCP LoadBalancer instead.

Cloud Console --> Network Services --> Load Balancing --> Create LB --> TCP Load Balancing.

There you can select your GKE Nodes (or node pool) as the 'Backend' and specify all the needed ports for the 'Frontend'. For the Frontend you can reserve a static IP right during configuration and specify the 'Port' range as two port numbers separated by a dash (assuming you have multiple ports to forward to your node pool). Additionally, you can create multiple 'Frontends' if needed.
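
The same setup can be sketched with gcloud instead of the Console. All names, the region/zone, and the port range below are placeholders; the instance names are the (truncated) node names from my cluster:

```shell
# Create a target pool containing the GKE nodes as the backend
gcloud compute target-pools create gke-pool --region=europe-west3
gcloud compute target-pools add-instances gke-pool \
  --instances=gke--mnnv,gke--nw9v --instances-zone=europe-west3-a

# Reserve a static IP, then create a forwarding rule (the
# frontend) exposing a range of NodePorts on that single IP
gcloud compute addresses create gke-lb-ip --region=europe-west3
gcloud compute forwarding-rules create gke-lb \
  --region=europe-west3 \
  --address=gke-lb-ip \
  --ports=30008-30017 \
  --target-pool=gke-pool
```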

I hope that helps.

-- Nick
Source: StackOverflow

4/14/2020

Is there a way to use NodePort with a load balancer?

The Kubernetes LoadBalancer type service builds on top of NodePort. Internally, when a LoadBalancer type service is created it automatically allocates a NodePort, and the cloud load balancer forwards traffic to that port on the nodes. It's tricky, but possible, to create a NodePort type service yourself and manually configure the Google-provided load balancer to point at the NodePorts.
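
To illustrate the automatic route: declaring the service as type LoadBalancer lets GKE allocate the NodePort and configure the Google load balancer for you. A sketch based on the manifest in the question (note that each LoadBalancer service gets its own external IP by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: foo-svc
spec:
  type: LoadBalancer   # GKE provisions an external LB and a NodePort under the hood
  selector:
    app: nginx
  ports:
    - name: foo
      protocol: TCP
      port: 80         # external port on the LB's IP
      targetPort: 80   # nginx container port
```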

-- Arghya Sadhu
Source: StackOverflow