Kops interaction between VPCs with internal configuration

10/7/2019

My goal is to deploy a cluster in AWS in a private VPC (let's say A) and set up a peering connection with another VPC (let's say B). The main idea is that access to k8s should be available only from VPC B. As far as I understand, to do this I should create a private-topology cluster with an internal load balancer, like this:

kops create cluster --name=$cluster --state=$state --zones=$zones --topology=private --networking=weave --api-loadbalancer-type=internal

But unfortunately kops puts the API load balancer in a private subnet, making it unreachable from the other VPC. If I make it public:

kops create cluster --name=$cluster --state=$state --zones=$zones --topology=private --networking=weave --api-loadbalancer-type=public

kops creates an internet-visible API load balancer, which I want to avoid for security reasons. Does anyone know whether it is possible to implement such a solution via kops?

-- Klimov Peter
amazon-web-services
kops
kubernetes

1 Answer

10/7/2019

The load balancer directs traffic to the pods running inside the cluster, so if the pods need to be exposed to external traffic, what you are doing works.

If you want to restrict access to the cluster API, which is used for cluster management, use the --admin-access flag. It restricts API access to the provided CIDR; if not set, access is not restricted by IP (default [0.0.0.0/0]).
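For example, keeping the cluster in a private topology but with a public API load balancer locked down to VPC B's address range might look like the following sketch. It assumes the same $cluster/$state/$zones variables as in the question, and 10.1.0.0/16 is a placeholder for VPC B's actual CIDR:

```shell
# Sketch: private topology, public API ELB, but API access
# restricted to VPC B's CIDR (10.1.0.0/16 is an assumed value --
# substitute the real range of your peered VPC).
kops create cluster \
  --name=$cluster \
  --state=$state \
  --zones=$zones \
  --topology=private \
  --networking=weave \
  --api-loadbalancer-type=public \
  --admin-access=10.1.0.0/16
```

Note that the load balancer itself is still internet-visible; the CIDR restriction is enforced via security group rules on the API endpoint.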

-- Colwin
Source: StackOverflow