How does a private cluster in GKE authenticate the GCP compute instances (VMs) in the same subnet?

7/8/2019

As we can see in this documentation, a private cluster is accessible by default from VMs (GCP compute instances) in the same subnet. Here is what the docs say:

From other VMs in the cluster's VPC network: Other VMs can use kubectl to communicate with the private endpoint only if they are in the same region as the cluster and their internal IP addresses are included in the list of master authorized networks.
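
For example, here is a rough sketch (assuming a cluster named my-cluster in zone us-central1-a; both names are placeholders) of how to check which networks are currently whitelisted on a cluster:

    # Print the master authorized networks configured on the cluster
    gcloud container clusters describe my-cluster \
        --zone us-central1-a \
        --format="yaml(masterAuthorizedNetworksConfig)"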

I have tested this (roughly as sketched below):

  • the cluster is accessible from VMs in the same subnetwork as the cluster
  • the cluster is not accessible from VMs in other subnetworks
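
For reference, the test above was run roughly as follows from a VM in each subnet (a sketch; cluster name and zone are placeholders):

    # Fetch kubeconfig credentials pointing at the cluster's private endpoint
    gcloud container clusters get-credentials my-cluster \
        --zone us-central1-a \
        --internal-ip

    # Works from the VM in the cluster's subnetwork, times out from the others
    kubectl get nodes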

How does this private cluster figure out which VMs to grant access to and which to reject?

-- Amit Yadav
google-cloud-platform
google-kubernetes-engine
kubernetes

2 Answers

7/8/2019

The Compute Engine instances (nodes) in a private cluster are isolated from the internet but have access to the master API server endpoint for authentication, which is exposed in the Google-managed project. However, resources outside the VPC are not, by default, allowed to reach that endpoint.

Master authorized networks are used to make the GKE master API available to the whitelisted external networks/addresses that want to authenticate against it. They are not about disallowing communication between the compute resources in the cluster's VPC; for that, you can simply use VPC-level firewall rules.
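
As a minimal sketch of what that looks like (cluster name, zone and CIDR are placeholders), whitelisting a network is done on the cluster itself rather than with firewall rules:

    # Allow clients in 203.0.113.0/29 to reach the cluster's master endpoint
    gcloud container clusters update my-cluster \
        --zone us-central1-a \
        --enable-master-authorized-networks \
        --master-authorized-networks 203.0.113.0/29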

-- yyyyahir
Source: StackOverflow

7/8/2019

It is not controlled by the private cluster.

It is controlled by the routing and firewall rules configured for the VPC's subnets. Even within the same VPC, you can block communication between subnets by adding a firewall rule.

https://cloud.google.com/vpc/docs/vpc#affiliated_resources
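
For example, a hypothetical rule like the following (network name, source range and priority are placeholders) would block traffic from one subnet to the instances in the rest of the VPC:

    # Deny all ingress from the 10.0.1.0/24 subnet to instances in this VPC
    gcloud compute firewall-rules create deny-from-subnet-b \
        --network my-vpc \
        --direction INGRESS \
        --action DENY \
        --rules all \
        --source-ranges 10.0.1.0/24 \
        --priority 900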

-- Ankit Deshpande
Source: StackOverflow