configmaps "aws-auth" already exists

11/7/2021

I'm creating three EKS clusters using this module. Everything works fine, except that when I try to add the configmap to the clusters using map_roles, I run into an issue.

My configuration, which I use within all three clusters, looks like this:

map_roles = [
  {
    rolearn  = "arn:aws:iam::${var.account_no}:role/argo-${var.environment}-${var.aws_region}"
    username = "system:node:{{EC2PrivateDNSName}}"
    groups   = ["system:bootstrappers", "system:nodes"]
  },
  {
    rolearn  = "arn:aws:sts::${var.account_no}:assumed-role/${var.assumed_role_1}"
    username = "admin"
    groups   = ["system:masters", "system:nodes", "system:bootstrappers"]
  },
  {
    rolearn  = "arn:aws:sts::${var.account_no}:assumed-role/${var.assumed_role_2}"
    username = "admin"
    groups   = ["system:masters", "system:nodes", "system:bootstrappers"]
  }
]
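
Each cluster is its own instance of the module, and I pass this same map_roles list to all three. To make that concrete, here is a simplified sketch of one cluster's module block, assuming a pre-v18 terraform-aws-modules/eks/aws module; the source, version, and every input other than map_roles are illustrative placeholders, not exact values:

module "eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 17.0"                    # illustrative; pre-v18 versions manage aws-auth themselves

  cluster_name    = "demo-${var.environment}"   # illustrative name
  cluster_version = "1.21"
  vpc_id          = var.vpc_id
  subnets         = var.subnet_ids

  # With manage_aws_auth = true the module renders map_roles into the
  # kube-system/aws-auth ConfigMap via the kubernetes provider.
  manage_aws_auth = true
  map_roles       = var.map_roles        # the list shown above
}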

The problem occurs when applying the template. It fails with:

configmaps "aws-auth" already exists

When I studied the error further, I realised that, when applying the template, the module creates three configmap resources with the same name, like these:

 resource "kubernetes_config_map" "aws_auth" {
   # ...
 }
 resource "kubernetes_config_map" "aws_auth" {
   # ...
 }
 resource "kubernetes_config_map" "aws_auth" {
   # ...
 }

This is obviously a problem. How do I fix it?

-- Red Bottle
amazon-web-services
kubernetes
terraform

2 Answers

3/29/2022

I've now tested my solution, which expands on @pst's "import aws-auth" answer, and it looks like this: break the terraform apply operation in your main eks project into 3 steps that completely isolate the eks resources from the k8s resources, so that you can manage the aws-auth ConfigMap from terraform workflows.

  1. terraform apply -target=module.eks
    • This creates just the eks cluster and anything else the module manages.
    • The eks module design now guarantees this will NOT include anything from the kubernetes provider.
  2. terraform import kubernetes_config_map.aws-auth kube-system/aws-auth
    • This brings the aws-auth map, generated by the creation of the eks cluster in the previous step, into the remote terraform state.
    • This is only necessary when the map isn't already in the state, so we first check, with something like:
  if terraform state show kubernetes_config_map.aws-auth ; then
    echo "aws-auth ConfigMap already exists in Remote Terraform State."
  else
    echo "aws-auth ConfigMap does not exist in Remote Terraform State. Importing..."
    terraform import -var-file="${TFVARS_FILE}" kubernetes_config_map.aws-auth kube-system/aws-auth
  fi
  3. terraform apply
    • This is a "normal" apply which behaves exactly as before, but will have nothing to do for module.eks. Most importantly, this call will not hit the "aws-auth ConfigMap already exists" error, since terraform is now aware of its existence, and the proposed plan will instead update aws-auth in place (see the resource sketch after this list for the declaration that the import in step 2 maps onto).
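
For the import in step 2 to have somewhere to land, the root module must already declare a resource at the address kubernetes_config_map.aws-auth. A minimal sketch of such a declaration, assuming the role mappings are fed in as a variable and rendered with yamlencode (the exact data contents are up to you):

resource "kubernetes_config_map" "aws-auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Illustrative: render whatever role mappings you manage in Terraform.
    mapRoles = yamlencode(var.map_roles)
  }
}

Once imported, the step-3 apply compares this declaration against whatever EKS wrote into the live object and updates it in place, exactly as described above.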

NB:
  1. Using a Makefile to wrap your terraform workflows makes this simple to implement.
  2. Using a monolithic root module with -target is a little ugly, and as your use of the kubernetes provider grows, it makes sense to break all the kubernetes terraform objects out into a separate project. But the above gets the job done.
  3. The separation of eks/k8s resources is best practice anyway, and is advised to prevent known race conditions between the aws and kubernetes providers. Follow the trail from here for details.
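
On point 3, when the kubernetes objects live in their own project, the kubernetes provider there can be wired to the already-created cluster via data sources instead of module outputs. A sketch of that wiring, assuming the cluster name is passed in as a variable:

data "aws_eks_cluster" "this" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}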

-- timblaktu
Source: StackOverflow

12/9/2021

The aws-auth ConfigMap is created by EKS when you create a managed node group. It contains the configuration required for nodes to register with the control plane. If you want to control the contents of the ConfigMap with Terraform, you have two options.

Either make sure you create the ConfigMap before the managed node group resource, or import the existing ConfigMap into the Terraform state manually.
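
The first option boils down to an explicit ordering: declare the ConfigMap yourself and make the node group depend on it, so EKS never needs to create one. A sketch using plain aws_eks_node_group and kubernetes_config_map resources (the references to aws_eks_cluster.this, aws_iam_role.node and the sizing are placeholders; inside a module the same effect comes from the module's own dependency wiring):

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode(var.map_roles)   # must include the node role mapping
  }
}

resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = "workers"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.subnet_ids

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }

  # Ensure aws-auth exists (with the node role mapped) before EKS brings up the
  # node group, so EKS does not create its own conflicting ConfigMap.
  depends_on = [kubernetes_config_map.aws_auth]
}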

-- pst
Source: StackOverflow