Can you limit the instance group(s) a service or load-balancer attaches to when running Kubernetes hosted on AWS via KOPS?

12/23/2019

We are running a Kubernetes cluster on AWS via KOPS.

Our cluster has multiple instance groups (see the docs describing this concept). Each of our instance groups is set up with its own requirements for auto-scaling, spot instances, and machine type. The goal is to attach both deployments and their connected services/load-balancers to specific instance groups.
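For context, one of our instance groups looks roughly like the sketch below. This is only illustrative: the machine type, sizes, maxPrice, subnet, and names are placeholders rather than our actual values.

# kops InstanceGroup manifest (sketch only; all concrete values below are placeholders)
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: <cluster_name>
  name: <ig_name>
spec:
  role: Node
  machineType: m5.large
  minSize: 2
  maxSize: 10
  maxPrice: "0.10"        # setting maxPrice makes the group bid for spot instances
  subnets:
  - eu-west-1a
  nodeLabels:
    kubernetes.io/role: <tag_ig_group>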

Now, for my deployments this setup works perfectly: I can pin the pods of a deployment to an instance group using a node label, like so:

# in kops: kops edit ig <name>
# (nodeLabels sits under the InstanceGroup spec)
...
  nodeLabels:
    kubernetes.io/role: <tag_ig_group>
...

# in kubernetes, in deployments.yaml
# (nodeSelector sits under spec.template.spec, i.e. in the pod template)
...
    spec:
      nodeSelector:
        kubernetes.io/role: <tag_ig_group>
...

The result is that any pod of the deployment gets assigned to a node of the instance group <tag_ig_group>. (Disclaimer: not sure if kubernetes.io/role is the 'best' label to use, but it works.)

Now, my question is: can you also do this for load-balancers in Kubernetes, and attach them to instance groups the way you can with deployments?
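To be concrete, the services in question are plain Services of type LoadBalancer, roughly like the sketch below (the name, selector, and ports are made up):

# services.yaml (sketch only; name, selector and ports are placeholders)
apiVersion: v1
kind: Service
metadata:
  name: <service_name>
spec:
  type: LoadBalancer            # makes the AWS cloud provider create an ELB for the service
  selector:
    app: <app_label>
  ports:
  - port: 80
    targetPort: 8080

As far as I can tell, the ELB created for such a service is registered against nodes from all instance groups, which is exactly what I would like to restrict.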


According to the docs you can attach existing load-balancers to instance groups. However, that is configured within KOPS, which is the opposite direction of what I am looking for: our cluster's services are dynamic, so each time a new service is created someone would need to (manually) go into KOPS and attach that service's load-balancer to the instance group.
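For reference, if I read the docs correctly, the KOPS-side approach looks roughly like this (the load-balancer name is a placeholder):

# in kops: kops edit ig <name>  (sketch of the docs' approach; the ELB name is a placeholder)
spec:
  externalLoadBalancers:
  - loadBalancerName: <existing_elb_name>

So the mapping lives on the instance group and assumes an already-existing load-balancer, whereas I would like it to live on the (dynamically created) service.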

So, I was wondering if someone has already run into this problem and if there are any solutions for it.

Thanks!

-- Boris
amazon-ec2
amazon-web-services
kops
kubernetes

0 Answers