Why is Kubernetes (AWS EKS) registering all workers to the Load Balancer?

3/2/2019

I want to know whether this is the default behaviour or something wrong with my setup.

I have 150 worker nodes running in my Kubernetes cluster.

Using a nodeSelector, I made a set of 10 Kubernetes workers run only one specific deployment, and I created a Service (type=LoadBalancer) for it. When the load balancer was created, all 150 Kubernetes workers were registered to it, while I was expecting to see only the 10 workers dedicated to this deployment/service.

It behaved the same with the alb-ingress-controller and an AWS NLB. Here is the Service:

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 8080
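  # targetPort defaults to port (8080) when unset; type LoadBalancer provisions an AWS load balancer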
  type: LoadBalancer

and the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 10
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: master-api
        image: private/my-app:prod
        resources:
          requests:
            memory: 8000Mi
        ports:
        - containerPort: 8080
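      # Schedule these pods only on the 10 nodes labeled role=api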
      nodeSelector:
        role: api 

I had already labeled the 10 worker nodes with role=api, and those 10 run only pods of this deployment; no other worker is running this service. I also don't have any other service or container using port 8080.
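For reference, this is roughly what one of those labeled nodes looks like as YAML (heavily abridged; the node name below is just a placeholder):

apiVersion: v1
kind: Node
metadata:
  name: ip-10-0-1-23.ec2.internal   # placeholder name, output abridged
  labels:
    role: api                       # the label matched by the Deployment's nodeSelector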

-- Eltorrooo
amazon-eks
amazon-web-services
kubernetes

1 Answer

3/2/2019

The ALB controller actually doesn't check your node labels etc. It purely looks at the tags on your subnets. So if your worker nodes are running inside a subnet tagged with kubernetes.io/role/alb-ingress or something like that, all the worker nodes from that subnet will be added to the load balancer.

I believe it is part of the auto-discovery mechanism (see the docs).
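As a rough sketch (untested, with placeholder subnet IDs and a hypothetical Ingress name), you can also skip the tag-based auto-discovery by pointing the controller at specific subnets explicitly, using the aws-alb-ingress-controller annotations:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress                          # hypothetical name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Listing subnets explicitly bypasses the tag-based subnet auto-discovery
    alb.ingress.kubernetes.io/subnets: subnet-aaaa1111,subnet-bbbb2222   # placeholders
    # The default target-type (instance) registers worker nodes via the service's
    # NodePort; `ip` registers the pod IPs instead
    alb.ingress.kubernetes.io/target-type: instance
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-service
          servicePort: 8080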

-- marcincuber
Source: StackOverflow