So I created a cluster containing 4 machines using this command:
gcloud container clusters create "[cluster-name]" \
--machine-type "n1-standard-1" \
--image-type "COS" \
--disk-size "100" \
--num-nodes "4"
and I can see that it creates 4 VM instances in my Compute Engine. I then set up deployments pointing to one or more entries in my Container Registry, plus services, with a single service exposing a public IP, roughly as sketched below.
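For reference, a minimal sketch of that setup, assuming a hypothetical my-app image in the registry and a container listening on port 8080 (the real names and ports differ):

# Create a deployment from an image in Container Registry
kubectl create deployment my-app --image=gcr.io/[project-id]/my-app:latest

# Expose it with a single public-facing service (provisions a GCP load balancer)
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080

# The EXTERNAL-IP column shows the public IP once the load balancer is ready
kubectl get service my-app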
All of this is working well, but it bothers me that all 4 VM instances it created have public IPs. Please correct me if I am wrong, but to my understanding, here's what happens behind the scenes:
Looking at this, I don't think I need a public IP on each of the VM instances created for the cluster.
I have been reading the documentation, and although I might have missed something, I can't seem to find the configuration arguments that would allow me to achieve this.
Currently, all GKE VMs get a public IP address, but they have firewall rules set up to block unauthorized network connections. Your Service or Ingress resources are still accessed through the load balancer's public IP address.
As of this writing, there's no way to prevent cluster nodes from getting public IP addresses.
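If you want to verify this yourself, here is a quick sketch using standard gcloud/kubectl commands (the gke- prefix in the filters assumes the default GKE node and firewall-rule naming):

# Nodes created for the cluster and their external IPs
kubectl get nodes -o wide

# The same instances as seen by Compute Engine
gcloud compute instances list --filter="name~^gke-"

# Firewall rules GKE created for the cluster's nodes
gcloud compute firewall-rules list --filter="name~^gke-"

# The only IP clients should use: the service's load balancer IP
kubectl get service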