Terraform to prevent forced updates of AWS EKS cluster

6/12/2020

I am using the terraform-aws-modules/eks registry module: https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/12.1.0?tab=inputs

Today, with a new change to my TF configs (unrelated to EKS), I saw that my EKS worker nodes are going to be rebuilt because of an AMI update, which I am trying to prevent:

  # module.kubernetes.module.eks-cluster.aws_launch_configuration.workers[0] must be replaced
+/- resource "aws_launch_configuration" "workers" {
      ~ arn                              = "arn:aws:autoscaling:us-east-2:555065427312:launchConfiguration:6c59fac6-5912-4079-8cc9-268a7f7fc98b:launchConfigurationName/edna-dev-eks-02020061119383942580000000b" -> (known after apply)
        associate_public_ip_address      = false
        ebs_optimized                    = true
        enable_monitoring                = true
        iam_instance_profile             = "edna-dev-eks20200611193836418800000007"
      ~ id                               = "edna-dev-eks-02020061119383942580000000b" -> (known after apply)
      ~ image_id                         = "ami-05fc7ae9bc84e5708" -> "ami-073f227b0cd9507f9" # forces replacement
        instance_type                    = "t3.medium"
      + key_name                         = (known after apply)
      ~ name                             = "edna-dev-eks-02020061119383942580000000b" -> (known after apply)
        name_prefix                      = "edna-dev-eks-0"
        security_groups                  = [
            "sg-09b14dfce82015a63",
        ]

The rebuild happens because EKS released an updated version of the AMI for the cluster's worker nodes.
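
As far as I can tell, the module picks the worker AMI with a "most recent" data lookup, roughly like the sketch below (based on the module's inputs, not its exact source; 602401143452 is the Amazon EKS optimized AMI account):

data "aws_ami" "eks_worker" {
  most_recent = true
  owners      = ["602401143452"] # Amazon EKS optimized AMI account

  filter {
    name   = "name"
    values = ["amazon-eks-node-1.16-v*"] # driven by the worker_ami_name_filter input
  }
}

So every new AMI release changes image_id, which forces the launch configuration to be replaced.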

This is my EKS Terraform config:

###################################################################################
# EKS CLUSTER                                                                     #
#                                                                                 #
# This module contains configuration for EKS cluster running various applications #
###################################################################################

module "eks_label" {
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace   = var.project
  environment = var.environment
  attributes  = [var.component]
  name        = "eks"
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "eks-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = module.eks_label.id
  cluster_version = "1.16"
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  worker_groups = [
    {
      instance_type = var.cluster_node_type
      asg_max_size  = var.cluster_node_count
    }
  ]

  tags = var.tags
}

If I try to add a lifecycle block to the module config:

lifecycle {
    ignore_changes = [image_id]
}
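
For context, I had nested it directly inside the module call, roughly like this, which is what triggers the error below:

module "eks-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = module.eks_label.id
  # ...rest of the config from above...

  # lifecycle is a reserved block type inside a module block
  lifecycle {
    ignore_changes = [image_id]
  }
}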

I get this error:

➜ terraform plan                                                                   

Error: Reserved block type name in module block

  on modules/kubernetes/main.tf line 45, in module "eks-cluster":
  45:   lifecycle {

The block type name "lifecycle" is reserved for use by Terraform in a future
version.

Any ideas?

-- DmitrySemenov
kubernetes
terraform
terraform-provider-aws

1 Answer

6/13/2020

What about using the worker_ami_name_filter variable for terraform-aws-modules/eks/aws to find only your current AMI?

For example:

module "eks-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = module.eks_label.id
  <...snip...>

  worker_ami_name_filter = "amazon-eks-node-1.16-v20200531"
}

You can use the AWS web console or CLI to map the AMI IDs to their names:

user@localhost:~$ aws ec2 describe-images --filters "Name=name,Values=amazon-eks-node-1.16*" --region us-east-2 --output json | jq '.Images[] | "\(.Name) \(.ImageId)"'
"amazon-eks-node-1.16-v20200423 ami-01782c0e32657accf"
"amazon-eks-node-1.16-v20200531 ami-05fc7ae9bc84e5708"
"amazon-eks-node-1.16-v20200609 ami-073f227b0cd9507f9"
"amazon-eks-node-1.16-v20200507 ami-0edc51bc2f03c9dc2"

But why are you trying to prevent the Auto Scaling Group from using a newer AMI? It will only apply the newer AMI to new nodes. It won't terminate existing nodes just to update them.

-- weichung.shaw
Source: StackOverflow