Accessing GCP Internal Load Balancer from another region

4/20/2019

I need to access, from another GCP region, an internal application running behind a GKE Nginx Ingress service that rides on an Internal Load Balancer.

I am fully aware that it is not possible using direct Google networking and it is a huge limitation (GCP Feature Request).

The Internal Load Balancer can be accessed perfectly well via a VPN tunnel from AWS, but I am not sure that creating such a tunnel between GCP regions within the same network is a good idea.

Workarounds are welcomed!

-- Miro
google-cloud-internal-load-balancer
google-kubernetes-engine
kubernetes

3 Answers

8/21/2019

Another possible way is to run an nginx reverse proxy on a Compute Engine instance in the same region as the GKE cluster, and use the internal IP of that instance to communicate with the GKE services.
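
For illustration, here is a minimal sketch of such a proxy, assuming nginx is already installed on a Compute Engine VM in the same region as the cluster and the internal load balancer answers on 10.123.4.5:80 (the IP and file path are placeholders):

# On the proxy VM: forward all traffic to the internal load balancer.
cat <<'EOF' | sudo tee /etc/nginx/conf.d/ilb-proxy.conf
server {
    listen 80;
    location / {
        proxy_pass http://10.123.4.5;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx

Clients in other regions then connect to the proxy VM's internal IP; plain VM-to-VM traffic within a VPC is routable across regions, which is what works around the regional restriction of the internal load balancer.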

-- Parakh Jain
Source: StackOverflow

1/9/2020

In the release notes from GCP, it is stated that:

Global access is an optional parameter for internal LoadBalancer Services that allows clients from any region in your VPC to access the internal TCP/UDP Load Balancer IP address.

Global access is enabled per-Service using the following annotation:
networking.gke.io/internal-load-balancer-allow-global-access: "true".

I tried the above annotation with the following manifest:

apiVersion: v1
kind: Service
metadata:
  name: ilb-global
  annotations:
    cloud.google.com/load-balancer-type: "Internal"

    # brand new annotation
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP

I tried accessing the load balancer IP from a VM sitting in a different region but it didn't work directly.

However, the following procedure helped me make the internal load balancer globally accessible.

Since an internal load balancer is essentially just a forwarding rule, we can use gcloud to enable global access on that rule.

  1. First, get the internal IP address of the load balancer using kubectl and note it down:

    # COMMAND:
    kubectl get services/ilb-global
    
    # OUTPUT:
    NAME           TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
    ilb-global     LoadBalancer   10.0.12.12   10.123.4.5    80:32400/TCP   18m
    

    Note the value under "EXTERNAL-IP", or run the following command to print just the IP:

    # COMMAND:
    kubectl get service/ilb-global \
      -o jsonpath='{.status.loadBalancer.ingress[].ip}'
    
    # OUTPUT:
    10.123.4.5
    
  2. GCP assigns a randomly generated name to the forwarding rule created for this load balancer. If you have multiple forwarding rules, use the following command to figure out which one belongs to the internal load balancer you just created:

    # COMMAND:
    gcloud compute forwarding-rules list | grep 10.123.4.5
    
    # OUTPUT
    NAME                              REGION       IP_ADDRESS      IP_PROTOCOL  TARGET
    a26cmodifiedb3f8252484ed9d0192    asia-south1  10.123.4.5      TCP          asia-south1/backendServices/a26cmodified44904b3f8252484ed9d019

    NOTE: If you are not working on Linux or grep is not installed, simply run gcloud compute forwarding-rules list and manually look for the forwarding rule with the IP address noted in step 1.

  3. Note the name of the forwarding rule and run the following command to update it with --allow-global-access (remember to use the beta command group, as this is still a beta feature):

    # COMMAND:
    gcloud beta compute forwarding-rules update a26cmodified904b3f8252484ed9d0192 \
    --region asia-south1 --allow-global-access
    
    # OUTPUT:
    Updated [https://www.googleapis.com/compute/beta/projects/PROJECT/regions/REGION/forwardingRules/a26hehemodifiedhehe490252484ed9d0192].
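
    Optionally, you can confirm that the flag took effect by describing the rule (a minimal check; the allowGlobalAccess field name comes from the beta API, so treat this as an assumption):

    # COMMAND:
    gcloud beta compute forwarding-rules describe a26cmodified904b3f8252484ed9d0192 \
      --region asia-south1 --format="get(allowGlobalAccess)"
    # Prints True once global access is enabled on the rule.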

And it's done. Now you can access this internal IP (10.123.4.5) from any instance in any region, as long as it is in the same VPC network.
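
As a quick sanity check (a sketch only; the instance name test-vm and zone us-central1-a are assumptions), you can SSH into a VM in a different region of the same VPC and curl the ILB IP:

# COMMAND (run against a VM living in another region):
gcloud compute ssh test-vm --zone us-central1-a \
  --command "curl -sS http://10.123.4.5/"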

-- Amit Yadav
Source: StackOverflow

4/26/2019

First of all, note that the only way to connect to a GCP resource (in this case your GKE cluster) from an on-premises location is through a Cloud Interconnect or VPN setup, and both must be in the same region and VPC to be able to communicate with each other.

Having said that, I see you would rather not do that within the same VPC, therefore a workaround for your scenario could be:

-- Galo
Source: StackOverflow