As we can see in the documentation, a private cluster is accessible by VMs (GCP Compute Engine instances) in the same subnet by default. Here is what is mentioned in the docs:
From other VMs in the cluster's VPC network: Other VMs can use kubectl to communicate with the private endpoint only if they are in the same region as the cluster and their internal IP addresses are included in the list of master authorized networks.
I have tested this behavior myself.
How does this private cluster figure out which VMs to grant access to and which VMs to reject?
The Compute Engine instances (nodes) in a private cluster are isolated from the internet, but they can reach the Master API server endpoint for authentication; that endpoint is publicly exposed in the Google-managed project. However, resources outside the VPC are not allowed to reach that endpoint by default.
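To make the setup described above concrete, here is a minimal sketch of creating a private cluster with `gcloud`. The cluster name, zone, and master CIDR are placeholders, not values from the question:

```shell
# Hypothetical example: create a private GKE cluster whose nodes have no
# external IP addresses. Name, zone, and CIDR range are placeholders.
gcloud container clusters create my-private-cluster \
    --zone us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --enable-master-authorized-networks
```

With `--enable-private-nodes`, the nodes communicate with the control plane over internal addressing, while the public endpoint remains gated by master authorized networks.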
Master Authorized Networks are used to make the GKE Master API available to the whitelisted external networks/addresses that want to authenticate against it. They are not related to disallowing communication between the compute resources in the cluster's VPC. For that, you can simply use VPC-level firewall rules.
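As a sketch of the whitelisting described above, an external range can be added to the cluster's master authorized networks like this. The cluster name, zone, and CIDR are placeholder values:

```shell
# Hypothetical example: allow one external address range to reach the
# cluster's public master endpoint. Cluster name and CIDR are placeholders.
gcloud container clusters update my-private-cluster \
    --zone us-central1-a \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24
```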
It is not controlled by the private cluster.
It is controlled by the routing and firewall rules configured for the VPC's subnets. Even within the same VPC, you can disable communication between VMs by adding a firewall rule.
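For example, a deny rule like the following blocks traffic between two groups of VMs in the same VPC. This is only a sketch; the network name, tags, and priority are placeholders:

```shell
# Hypothetical example: deny all ingress from VMs tagged "group-a" to VMs
# tagged "group-b" within the same VPC. Names and priority are placeholders;
# the priority must be lower (numerically) than any allow rule it should win over.
gcloud compute firewall-rules create deny-vm-to-vm \
    --network my-vpc \
    --direction INGRESS \
    --action DENY \
    --rules all \
    --source-tags group-a \
    --target-tags group-b \
    --priority 900
```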