GCP Container Engine with no public IP VMs

8/18/2017

So I created a cluster containing 4 machines using this command:

gcloud container clusters create "[cluster-name]" \
  --machine-type "n1-standard-1" \
  --image-type "COS" \
  --disk-size "100" \
  --num-nodes "4"

and I can see that it creates 4 VM instances in Compute Engine. I then set up deployments pointing to one or more entries in my container registry, plus services, with a single service exposing a public IP, roughly as sketched below.
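For context, the deployment and service were created along these lines; "my-app" and the image path are placeholders for my actual deployment name and Container Registry image:

# Create a deployment from an image in Container Registry
kubectl run my-app --image=gcr.io/[project-id]/my-app:latest --replicas=2

# The single service exposing a public IP (backed by a network load balancer)
kubectl expose deployment my-app --type=LoadBalancer --port=80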

All of this is working well, but it bothers me that all 4 VM instances it created have public IPs. Please do correct me if I am wrong, but to my understanding, here is what happens behind the scenes (commands to verify each step are sketched after the list):

  1. A container cluster is created
  2. VM instances are created for the cluster in #1
  3. An instance group is created, with the VM instances from #2 as members
  4. (Since I have one service exposing a public IP) a network load balancer is created, pointing to the instance group from #3 or the VM instances from #2
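Each of those resources can be listed with gcloud to sanity-check this picture (the gke- name prefix below is an assumption based on how GKE appears to name node VMs):

# The VM instances created for the cluster, with their external IPs
gcloud compute instances list --filter="name~^gke-"

# The instance group the node VMs belong to
gcloud compute instance-groups list

# The forwarding rule created for the service's network load balancer
gcloud compute forwarding-rules list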

Looking at this, I don't think I need a public IP on each of the VM instances created for the cluster.

I have been reading the documentation, and although I think I might have missed something, I can't seem to find the configuration arguments that would allow me to achieve this.

-- littlechad
gcloud
google-kubernetes-engine
kubernetes

1 Answer

8/21/2017

Currently all GKE VMs get a public IP address, but they have firewall rules set up to block unauthorized network connections. Your Service or Ingress resources are still accessed through the load balancer's public IP address.
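For example, you can inspect the firewall rules GKE set up for your cluster and confirm that traffic reaches your service through the load balancer IP rather than the node IPs (the gke- rule-name prefix is the usual convention, and [service-name] is a placeholder):

# Firewall rules created for the cluster nodes
gcloud compute firewall-rules list --filter="name~^gke-"

# The EXTERNAL-IP column shows the load balancer's public IP
kubectl get service [service-name]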

As of this writing, there is no way to prevent cluster nodes from getting public IP addresses.

-- AhmetB - Google
Source: StackOverflow