Kubernetes on Google Cloud - Access pod port without port-forwarding

4/2/2020

I have a Google Cloud project with:

  1. An internal (VPC) network.
  2. A Kubernetes cluster deployed on this internal network.
  3. A Deployment with a Service (no external IP).

Services:

NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
redis-master   ClusterIP   10.0.0.213   <none>        6379/TCP   27s

Now, I have also deployed another VM instance on the same internal network. I want this VM to reach the IP 10.0.0.213 on port 6379, but it's not working.

I read here that I need to port-forward to make this possible, but I don't want to expose my Kubernetes cluster credentials on this VM.

A LoadBalancer Service will give me an external IP, which would work within the internal network but would also be reachable from the internet.

So, how do I expose it only to the Google internal network?

-- No1Lives4Ever
gke-networking
google-cloud-networking
google-cloud-platform
kubernetes

1 Answer

4/3/2020

I guess what you need is an Internal Load Balancer. A ClusterIP is only routable from inside the cluster, whereas a Service exposed through an internal load balancer gets an IP that is reachable from the whole VPC network while staying off the internet. You can simply annotate the Service with cloud.google.com/load-balancer-type: "Internal" and set its type to LoadBalancer. See the GKE documentation on internal load balancing.
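A minimal sketch of such a Service, assuming the Redis pods carry the label app: redis-master and listen on port 6379 (the name and selector are placeholders; adjust them to match your Deployment):

apiVersion: v1
kind: Service
metadata:
  name: redis-master
  annotations:
    # Tells GKE to provision an internal (VPC-only) TCP load balancer
    # instead of an internet-facing one.
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer          # needed so GKE provisions a load balancer at all
  selector:
    app: redis-master         # assumed pod label; match your Deployment's labels
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379

Once GKE assigns the internal address (it appears under EXTERNAL-IP in kubectl get service redis-master), a VM in the same VPC network (and, for this load balancer type, the same region) can connect to it on port 6379 without port-forwarding or cluster credentials.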

-- kitt
Source: StackOverflow