EKS ELB: odd instances in the list

9/17/2019

I've configured an application to run on 2 EC2 instances, with a Kubernetes Service of type LoadBalancer for this application (Selector: app=some-app). I also have 10+ instances running in the EKS cluster. According to the service output, everything is fine:

Name:                     some-app
Namespace:                default
Labels:                   app=some-app
Annotations:              external-dns.alpha.kubernetes.io/hostname: some-domain
                          service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 3600
                          service.beta.kubernetes.io/aws-load-balancer-internal: true
Selector:                 app=some-app
Type:                     LoadBalancer
IP:                       172.20.206.150
LoadBalancer Ingress:     internal-blablabla.eu-west-1.elb.amazonaws.com
Port:                     default  80/TCP
TargetPort:               80/TCP
NodePort:                 default  30633/TCP
Endpoints:                10.30.21.238:80,10.30.22.38:80
Port:                     admin  80/TCP
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

But when I check the AWS console I see that all 10+ instances are registered with the ELB. (If I use an Application Load Balancer instead, only the 2 instances are present.) Is there any configuration to remove the extra instances?

-- malyy
amazon-elb
eks
kubernetes

1 Answer

9/17/2019

That's the default behaviour for the ELB/NLB: every node in the cluster is registered as a target, and once traffic hits a node, kube-proxy redirects it to a node where your pods are actually running.
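If you want only the pod-hosting nodes to stay in service, one option (not mentioned in the question, so treat this as a hedged sketch) is externalTrafficPolicy: Local. Here is a minimal Service manifest reconstructed from the describe output above; the names and ports are the questioner's:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: some-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
    - name: default
      port: 80
      targetPort: 80
  # Default is Cluster: all nodes are registered and kube-proxy forwards
  # traffic to the pods. With Local, traffic stays on the receiving node,
  # so nodes without a matching pod fail the ELB health check and drop
  # out of rotation -- effectively only the 2 instances serve traffic.
  externalTrafficPolicy: Local
```

Note that Local also preserves the client source IP but gives up cross-node load spreading, so traffic is only as balanced as the pod placement.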

If you're using the ALB ingress controller, then that's also standard behaviour: it only registers the instances where your pods are running, skipping the iptables mumbo jumbo ;)
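For reference, a sketch of what that looks like with the AWS ALB ingress controller (the Ingress name and host are taken from the question; the annotation values are assumptions, check your controller's version):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: some-app
  annotations:
    kubernetes.io/ingress.class: alb
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # "ip" registers the pod IPs directly as ALB targets, so only the
    # instances actually hosting pods appear; "instance" goes through
    # the NodePort like a classic ELB.
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - host: some-domain
      http:
        paths:
          - backend:
              serviceName: some-app
              servicePort: 80
```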

-- Fernando Battistella
Source: StackOverflow