How to gracefully drain a node in EKS?

8/13/2018

Sometimes we need to drain nodes in Kubernetes. When I set up a k8s cluster manually, I can drain a specific node and then terminate that machine. In EKS, however, nodes belong to an Auto Scaling group, which means I can't simply terminate a specific instance (node): if I manually terminate an instance, another instance (node) is automatically added to the EKS cluster in its place.

So is there any suggested method to drain a node in EKS?

-- Shengxin Zhang
amazon-web-services
autoscaling
docker
kubernetes

1 Answer

1/17/2019

These steps should work:

1.) kubectl get nodes

2.) kubectl cordon <node name>

3.) kubectl drain <node name> --ignore-daemonsets

4.) aws autoscaling terminate-instance-in-auto-scaling-group --instance-id <instance-id> --should-decrement-desired-capacity

For step 3, if any pods on the node use emptyDir volumes, drain will refuse to evict them unless you add --delete-local-data (note that the data in those volumes is deleted along with the pods):

kubectl drain <node name> --ignore-daemonsets --delete-local-data

For the AWS Auto Scaling group: if your nodes span multiple Availability Zones, drain and delete nodes from each zone rather than removing all of them from a single zone, so the group stays balanced.
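To see which zone each node is in before picking one per zone, you can ask kubectl to print the zone label as an extra column. This is my own suggestion rather than part of the answer; the label name shown is the current well-known topology label (older clusters from this era use `failure-domain.beta.kubernetes.io/zone` instead):

```shell
# Print each node with its Availability Zone as an extra column (-L adds
# a column for the given label). Substitute the older beta label name on
# pre-1.17 clusters.
kubectl get nodes -L topology.kubernetes.io/zone
```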

After running the commands above, check the Auto Scaling group's desired capacity; it should have decreased automatically. If you manage the group with Terraform or another infrastructure-as-code tool, don't forget to update the Auto Scaling group configuration in your scripts as well, so it matches the new capacity.
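The steps above can be tied together in a small script. The function name and the use of the node's `.spec.providerID` to recover the EC2 instance ID for step 4 are my assumptions, not part of the original answer; everything else mirrors the commands listed:

```shell
#!/usr/bin/env sh
# Hypothetical wrapper around the four steps: cordon, drain, then
# terminate the backing EC2 instance while shrinking the ASG.
# Usage: drain_eks_node <node-name>
drain_eks_node() {
  node="$1"

  # Step 2: mark the node unschedulable so no new pods land on it.
  kubectl cordon "$node"

  # Step 3: evict the pods (emptyDir data on the node is lost).
  kubectl drain "$node" --ignore-daemonsets --delete-local-data

  # The instance ID for step 4 is the last path segment of the node's
  # providerID, e.g. aws:///us-west-2a/i-0123456789abcdef0.
  provider_id="$(kubectl get node "$node" \
    -o jsonpath='{.spec.providerID}')"
  instance_id="${provider_id##*/}"

  # Step 4: terminate the instance and lower the desired capacity so
  # the Auto Scaling group does not replace it.
  aws autoscaling terminate-instance-in-auto-scaling-group \
    --instance-id "$instance_id" \
    --should-decrement-desired-capacity
}
```

Running it as `drain_eks_node <node name>` assumes kubectl and the AWS CLI are already configured for the cluster and account in question.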

-- Steve-Liang
Source: StackOverflow