I'm creating three EKS clusters using this module. Everything works fine, except that when I try to add the configmap to the clusters using map_roles, I run into an issue.

My configuration looks like this, and I have it within all three clusters:
map_roles = [
  {
    rolearn  = "arn:aws:iam::${var.account_no}:role/argo-${var.environment}-${var.aws_region}"
    username = "system:node:{{EC2PrivateDNSName}}"
    groups   = ["system:bootstrappers", "system:nodes"]
  },
  {
    rolearn  = "arn:aws:sts::${var.account_no}:assumed-role/${var.assumed_role_1}"
    username = "admin"
    groups   = ["system:masters", "system:nodes", "system:bootstrappers"]
  },
  {
    rolearn  = "arn:aws:sts::${var.account_no}:assumed-role/${var.assumed_role_2}"
    username = "admin"
    groups   = ["system:masters", "system:nodes", "system:bootstrappers"]
  }
]
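For reference, each of the three clusters is declared in its own module block, roughly along the lines of the sketch below. Apart from map_roles, the source, version, and inputs shown here are placeholders rather than my exact values:

module "eks_cluster_one" {
  source  = "terraform-aws-modules/eks/aws"   # placeholder module source
  version = "~> 17.0"                         # illustrative version

  cluster_name    = "cluster-one"             # placeholder
  vpc_id          = var.vpc_id
  subnets         = var.private_subnets
  manage_aws_auth = true                      # the module creates and manages the aws-auth ConfigMap

  map_roles = [
    # ... the three role mappings shown above ...
  ]
}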
The problem occurs while applying the template. It says:

configmaps "aws-auth" already exists

When I studied the error further, I realised that when applying the template the module creates three ConfigMap resources with the same name, like these:
resource "kubernetes_config_map" "aws_auth" {
# ...
}
resource "kubernetes_config_map" "aws_auth" {
# ...
}
resource "kubernetes_config_map" "aws_auth" {
# ...
}
This obviously is a problem. How do I fix this issue?
I've now tested my solution, which expands on @pst's "import aws-auth" answer and looks like this: break up the terraform apply operation in your main eks project into 3 steps, which completely isolate the eks resources from the k8s resources, so that you may manage the aws-auth ConfigMap from Terraform workflows.
1. terraform apply -target=module.eks

   This first apply creates the EKS resources on their own, without touching any of the Kubernetes resources in the project.
2. terraform import kubernetes_config_map.aws-auth kube-system/aws-auth

   This imports the aws-auth ConfigMap that EKS created into the Terraform state, under the kubernetes_config_map.aws-auth resource the root project declares (sketched after these steps). In a scripted workflow you can make the import idempotent by checking the state first:
   if terraform state show kubernetes_config_map.aws-auth ; then
     echo "aws-auth ConfigMap already exists in Remote Terraform State."
   else
     echo "aws-auth ConfigMap does not exist in Remote Terraform State. Importing..."
     terraform import -var-file="${TFVARS_FILE}" kubernetes_config_map.aws-auth kube-system/aws-auth
   fi
3. terraform apply

   This applies the rest of the project, including module.eks. Most importantly, this call will not encounter the "aws-auth ConfigMap already exists" error since Terraform is aware of its existence, and instead the proposed plan will update aws-auth in place.
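For the import in step 2 to have somewhere to land, the root project has to declare a resource at the address kubernetes_config_map.aws-auth. A minimal sketch of that resource, assuming a hypothetical var.map_roles variable shaped like the list in the question:

resource "kubernetes_config_map" "aws-auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Render the role mappings as the YAML string EKS expects; var.map_roles is a
    # hypothetical variable holding rolearn/username/groups objects like those in the question.
    mapRoles = yamlencode(var.map_roles)
  }
}

After the import, the plain terraform apply in step 3 reconciles this resource's data against what EKS wrote, which is exactly the update-in-place described above.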
NB:

1. Using a Makefile to wrap your terraform workflows makes this simple to implement (see the sketch below).
2. Using a monolithic root module with -target is a little ugly, and as your use of the kubernetes provider grows, it makes sense to break out all the kubernetes terraform objects into a separate project. But the above gets the job done.
3. The separation of eks/k8s resources is best practice anyway, and is advised to prevent known race conditions between aws and k8s providers. Follow the trail from here for details.
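As a sketch of NB 1, the three steps drop straight into a small wrapper. Here it is as a shell script (the same commands work as the body of a Makefile target); TFVARS_FILE and the -var-file flags are assumptions about how variables are passed, not something the steps above require:

#!/usr/bin/env bash
set -euo pipefail

TFVARS_FILE="${TFVARS_FILE:-terraform.tfvars}"

# Step 1: create the EKS resources on their own.
terraform apply -target=module.eks -var-file="${TFVARS_FILE}"

# Step 2: import the aws-auth ConfigMap only if it is not already in state.
if terraform state show kubernetes_config_map.aws-auth ; then
  echo "aws-auth ConfigMap already exists in Remote Terraform State."
else
  echo "aws-auth ConfigMap does not exist in Remote Terraform State. Importing..."
  terraform import -var-file="${TFVARS_FILE}" kubernetes_config_map.aws-auth kube-system/aws-auth
fi

# Step 3: apply the rest of the project; aws-auth is updated in place.
terraform apply -var-file="${TFVARS_FILE}"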
The aws-auth ConfigMap is created by EKS when you create a managed node pool. It holds the configuration required for nodes to register with the control plane. If you want to control the contents of the ConfigMap with Terraform, you have two options: either make sure you create the ConfigMap before the managed node pool resource, or import the existing ConfigMap into the Terraform state manually.
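A minimal sketch of the first option using raw resources (the resource and variable names here are illustrative, not from the question's module): declare the ConfigMap yourself and make the managed node group depend on it, so the ConfigMap exists before any node registers and EKS never has to create it behind Terraform's back.

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode(var.map_roles)   # hypothetical variable holding the role mappings
  }
}

resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.this.name   # assumes the cluster is a plain aws_eks_cluster resource
  node_group_name = "workers"
  node_role_arn   = var.node_role_arn
  subnet_ids      = var.private_subnets

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }

  # Create the ConfigMap before the node group; otherwise EKS writes aws-auth itself
  # and a later Terraform create collides with the "already exists" error.
  depends_on = [kubernetes_config_map.aws_auth]
}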