EKS Node EC2 instance gets terminated

4/12/2019

I ran through these steps to update the instance type of an existing EKS cluster.

https://docs.aws.amazon.com/eks/latest/userguide/update-stack.html

One of the nodes was terminated and did not come back up. Now I'm stuck with one node and little understanding of how to bring another node back up.

Is there any direction or documentation that I can follow to bring it back up?

https://codefresh.io/kubernetes-tutorial/recover-broken-kubernetes-cluster/

I looked through this but couldn't figure out whether it was what I needed.

-- cdes
eks
kubernetes

1 Answer

4/12/2019

Follow the EKS Getting Started guide: scroll down to the section Step 3: Launch and Configure Amazon EKS Worker Nodes and work through the instructions there.

The basic idea behind nodes in EKS is to create them in Auto Scaling Groups (ASGs); don't be deceived by the name, as the default deploy script defines no scaling policies, so the group is effectively static. Each ASG represents a single machine type but can scale to any number of instances. If you need to add a new machine type, you can reuse the provided CloudFormation template and specify the desired instance type during deployment.
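
If the group's desired capacity dropped when your node was terminated, raising it back is often enough to get a replacement instance. Here is a minimal sketch using boto3, assuming AWS credentials are already configured; the ASG name below is a placeholder, so confirm the real name from the listing (or the EC2 console) first.

    # Sketch: restore the worker Auto Scaling Group to two nodes.
    # The ASG name is a placeholder -- confirm it via the listing below.
    import boto3

    asg = boto3.client("autoscaling")

    ASG_NAME = "eks-worker-nodes-NodeGroup"  # placeholder name

    # List groups to confirm the name and current capacity.
    for g in asg.describe_auto_scaling_groups()["AutoScalingGroups"]:
        print(g["AutoScalingGroupName"], g["DesiredCapacity"])

    # Bump desired capacity; the ASG launches a replacement instance.
    asg.set_desired_capacity(
        AutoScalingGroupName=ASG_NAME,
        DesiredCapacity=2,
        HonorCooldown=False,
    )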

Once your ASG is up and the nodes are running, you need to register them with the Kubernetes control plane. In EKS, this is done by adding the node instance role created in the previous step (the NodeInstanceRole stack output) to the aws-auth ConfigMap. This registers the nodes with the control plane, and you should see them come up in kubectl get nodes.
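
As an illustration of that registration step, here is a sketch using the official kubernetes Python client to append a role mapping to the aws-auth ConfigMap; the role ARN is a placeholder, and in practice you would normally just kubectl apply the aws-auth ConfigMap manifest from the guide instead.

    # Sketch: add a node instance role to the aws-auth ConfigMap so
    # kubelets on the new instances can join the cluster. Assumes
    # kubectl access is configured and aws-auth already exists.
    from kubernetes import client, config

    # Placeholder ARN -- use the NodeInstanceRole output from your
    # worker-node CloudFormation stack.
    NODE_ROLE_ARN = "arn:aws:iam::123456789012:role/eks-worker-NodeInstanceRole"

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # aws-auth lives in kube-system; its mapRoles key is a YAML string.
    cm = v1.read_namespaced_config_map("aws-auth", "kube-system")
    existing = (cm.data or {}).get("mapRoles", "")
    if existing and not existing.endswith("\n"):
        existing += "\n"
    entry = (
        f"- rolearn: {NODE_ROLE_ARN}\n"
        "  username: system:node:{{EC2PrivateDNSName}}\n"
        "  groups:\n"
        "    - system:bootstrappers\n"
        "    - system:nodes\n"
    )
    cm.data = dict(cm.data or {}, mapRoles=existing + entry)
    v1.replace_namespaced_config_map("aws-auth", "kube-system", cm)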

I highly recommend you follow the Getting Started guide mentioned above; it's an excellent intro to EKS and its powerful features.

Hope this helps!

-- Frank Yucheng Gu
Source: StackOverflow