AWS EKS Cluster Autoscaling

10/31/2019

I have an AWS EKS cluster (version 1.12) for my applications. We have deployed 6 apps in the cluster and everything is working fine. While creating the nodes I added an autoscaling node group that spans availability zones, with a minimum of 3 and a maximum of 6 nodes, and the desired 3 nodes are running fine.

My scenario is this: when a memory spike happens I need to get more nodes, up to the maximum I set on the auto scaling group, but at the time of cluster setup I didn't add the Cluster Autoscaler. Can somebody please address the following doubts:

  1. As per the AWS documentation, the Cluster Autoscaler is not supported if our node group spans multiple AZs.
  2. If we do need to create multiple node groups as per the AWS docs, how do we specify the min/max nodes? Is it for the entire cluster?
  3. How can I achieve autoscaling on a memory metric, since it doesn't come by default like the CPU metric?
-- Lakshmi Reddy
amazon-ec2
autoscaling
aws-eks
docker
kubernetes

1 Answer

10/31/2019

You should create one node group per AZ. So if your cluster size is 6 nodes, create node groups of 2 instances each, one in each AZ. You can also spread the pods across AZs for high availability. If you look at the Cluster Autoscaler documentation, it recommends:

Cluster autoscaler does not support Auto Scaling Groups which span multiple Availability Zones; instead you should use an Auto Scaling Group for each Availability Zone and enable the --balance-similar-node-groups feature. If you do use a single Auto Scaling Group that spans multiple Availability Zones you will find that AWS unexpectedly terminates nodes without them being drained because of the rebalancing feature.
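As a minimal sketch, assuming you manage the cluster with eksctl and run in three AZs of us-east-1, the per-AZ node groups could look like the config below; the cluster name, region, AZ names, and instance type are placeholders for your own values:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster        # placeholder cluster name
  region: us-east-1       # placeholder region

nodeGroups:
  # One node group per AZ, each with its own min/max
  - name: ng-1a
    instanceType: m5.large              # placeholder instance type
    availabilityZones: ["us-east-1a"]
    minSize: 1
    desiredCapacity: 1
    maxSize: 2
  - name: ng-1b
    instanceType: m5.large
    availabilityZones: ["us-east-1b"]
    minSize: 1
    desiredCapacity: 1
    maxSize: 2
  - name: ng-1c
    instanceType: m5.large
    availabilityZones: ["us-east-1c"]
    minSize: 1
    desiredCapacity: 1
    maxSize: 2
```

So the min/max is set per node group (per AZ), not for the entire cluster, and running the Cluster Autoscaler with the --balance-similar-node-groups flag keeps these groups at similar sizes, as the documentation quoted above recommends.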

I am assuming you want to scale the pods based on memory. For that you will have to use the metrics server or Prometheus and create an HPA that scales based on memory. You can find a working example here.
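As a rough sketch, assuming the metrics server is installed and you have a Deployment named my-app (the name, namespace, replica bounds, and threshold below are placeholders), a memory-based HPA could look like this:

```yaml
apiVersion: autoscaling/v2beta2          # available from Kubernetes 1.12
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-memory-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                         # placeholder Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80         # scale out above ~80% of requested memory
```

Note that utilization-based scaling only works if the pods have memory requests set, since utilization is computed against the requested amount.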

-- Vishal Biyani
Source: StackOverflow