eksctl: Updating node definitions via cluster config file not working

5/11/2020

I am using eksctl to create our EKS cluster.

The first run works fine, but when I want to update the cluster config later on, it does not work.

I have a cluster config file, but any changes made to it are not reflected by the update/upgrade commands.

What am I missing?

Cluster.yaml:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata: 
  name: supplier-service
  region: eu-central-1

vpc:
  subnets:
    public: 
      eu-central-1a: {id: subnet-1}
      eu-central-1b: {id: subnet-2}
      eu-central-1c: {id: subnet-3}

nodeGroups:
  - name: ng-1
    instanceType: t2.medium
    desiredCapacity: 3
    ssh: 
      allow: true
    securityGroups:
      withShared: true
      withLocal: true
      attachIDs: ['sg-1', 'sg-2']
    iam:
      withAddonPolicies:
        autoScaler: true

Now, if in the future I want to change the instance type or the number of nodes (desiredCapacity), I have to destroy the entire cluster and recreate it, which becomes quite cumbersome.

How can I do in-place upgrades of clusters created by eksctl? Thank you.

-- We are Borg
amazon-eks
amazon-vpc
amazon-web-services
eksctl
kubernetes

1 Answer

5/27/2020

To upgrade the cluster using eksctl (see the example commands after this list):

  1. Upgrade the control plane version
  2. Upgrade coredns, kube-proxy and aws-node
  3. Upgrade the worker nodes
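
For steps 1 and 2 the commands look roughly like this, using the cluster name and region from your Cluster.yaml; the exact flags can vary between eksctl versions, so check the cluster-upgrade page in [0]:

  # 1. Upgrade the control plane by one minor Kubernetes version
  eksctl upgrade cluster --name=supplier-service --region=eu-central-1 --approve

  # 2. Update the default add-ons to versions matching the new control plane
  eksctl utils update-kube-proxy --cluster=supplier-service --region=eu-central-1 --approve
  eksctl utils update-aws-node --cluster=supplier-service --region=eu-central-1 --approve
  eksctl utils update-coredns --cluster=supplier-service --region=eu-central-1 --approve

Step 3 is done by replacing the nodegroup, as described below.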

If you just want to update the nodegroup while keeping the same configuration, you can simply change the nodegroup name, e.g. append -v2 to it. [0]
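
A sketch of that flow, assuming the Cluster.yaml from the question with ng-1 renamed to ng-1-v2 (the dry-run then --approve pattern is the one described in [0]):

  # After renaming the nodegroup in Cluster.yaml, create the new one
  eksctl create nodegroup --config-file=Cluster.yaml

  # Then delete every nodegroup that is no longer present in the file
  eksctl delete nodegroup --config-file=Cluster.yaml --only-missing            # dry run
  eksctl delete nodegroup --config-file=Cluster.yaml --only-missing --approve  # apply

eksctl drains the old nodes before deleting them, so workloads get rescheduled onto the new nodegroup.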

If you want to change the node group configuration, such as the instance type, you need to create a new node group: eksctl create nodegroup --config-file=dev-cluster.yaml [1]
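
For example, to move off t2.medium you could add a second nodegroup to Cluster.yaml next to the existing ng-1 (the name ng-2 and the t3.medium instance type here are just illustrative), create it, and then delete the old group:

  # In Cluster.yaml, add ng-2 to the nodeGroups list (ng-1 entry omitted here)
  nodeGroups:
    - name: ng-2
      instanceType: t3.medium
      desiredCapacity: 3
      iam:
        withAddonPolicies:
          autoScaler: true

  # Create only the new nodegroup, then remove the old one
  eksctl create nodegroup --config-file=Cluster.yaml --include=ng-2
  eksctl delete nodegroup --cluster=supplier-service --region=eu-central-1 --name=ng-1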

[0] https://eksctl.io/usage/cluster-upgrade/#updating-multiple-nodegroups-with-config-file

[1] https://eksctl.io/usage/managing-nodegroups/#creating-a-nodegroup-from-a-config-file

-- jmselmi
Source: StackOverflow