Using a public ECR image in local Kubernetes cluster in Terraform

2/17/2022

I've set up a very simple local Kubernetes cluster for development purposes, and for that I aim to pull a Docker image for my pods from ECR.

Here's the code:

    terraform {
      required_providers {
        kubernetes = {
          source  = "hashicorp/kubernetes"
          version = ">= 2.0.0"
        }
      }
    }

    provider "kubernetes" {
      config_path = "~/.kube/config"
    }

    resource "kubernetes_deployment" "test" {
      metadata {
        name      = "test-deployment"
        namespace = kubernetes_namespace.test.metadata.0.name
      }

      spec {
        replicas = 2
        selector {
          match_labels = {
            app = "MyTestApp"
          }
        }

        template {
          metadata {
            labels = {
              app = "MyTestApp"
            }
          }

          spec {
            container {
              image = "public ECR URL" # <--- this times out
              name  = "my-test-pod"    # container names must be lowercase (RFC 1123)

              port {
                container_port = 4000
              }
            }
          }
        }
      }
    }

I've set that ECR repo to public and made sure that it's accessible. My challenge is that in a normal scenario you have to log in to ECR in order to retrieve the image, and I do not know how to achieve that in Terraform. So on 'terraform apply', it times out and fails.

I read the documentation on aws_ecr_repository, aws_ecr_authorization_token, the Terraform EKS module, and local-exec, but none of them seems to have a solution for this.

Achieving this in a GitLab pipeline is fairly easy, but how can one achieve it in Terraform? How can I pull an image from a public ECR repo for my local Kubernetes cluster?

-- Pouya Ataei
amazon-ecr
amazon-web-services
kubernetes
terraform

1 Answer

2/24/2022

After a while, I figured out the cleanest way to achieve this.

First, retrieve your ECR authorization token data:

data "aws_ecr_authorization_token" "token" {
}
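
Note that this data source is served by the AWS provider, which the configuration shown so far does not declare. A minimal sketch of that missing piece, assuming credentials come from the standard AWS CLI chain and that the region is only a placeholder:

provider "aws" {
  # Credentials are resolved from the usual chain (environment
  # variables, ~/.aws/credentials, etc.); the region is a placeholder.
  region = "us-east-1"
}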

Second, create a secret for your Kubernetes cluster:

resource "kubernetes_secret" "docker" {
  metadata {
    name      = "docker-cfg"
    namespace = kubernetes_namespace.test.metadata.0.name
  }

  data = {
    ".dockerconfigjson" = jsonencode({
      auths = {
        "${data.aws_ecr_authorization_token.token.proxy_endpoint}" = {
          auth = data.aws_ecr_authorization_token.token.authorization_token
        }
      }
    })
  }

  type = "kubernetes.io/dockerconfigjson"
}

Bear in mind that the example in the docs base64-encodes the username and password; the exported authorization_token attribute is already in that form, so it can be used directly as the auth value.
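
To make the encoding explicit: the token could hypothetically be rebuilt by hand from the data source's user_name/password attributes. The local below is purely illustrative and not needed in practice:

locals {
  # Illustration only: ECR tokens always use the user name "AWS", so this
  # equals data.aws_ecr_authorization_token.token.authorization_token.
  auth_manual = base64encode("AWS:${data.aws_ecr_authorization_token.token.password}")
}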

Third, once the secret is created, have your pods use it via image_pull_secrets:

resource "kubernetes_deployment" "test" {
  metadata {
    name      = "my-test-app" # resource names must be lowercase (RFC 1123)
    namespace = kubernetes_namespace.test.metadata.0.name
  }
  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "MyTestApp"
      }
    }

    template {
      metadata {
        labels = {
          app = "MyTestApp"
        }
      }

      spec {
        image_pull_secrets {
          name = kubernetes_secret.docker.metadata.0.name
        }

        container {
          image             = "test-image-URL"
          name              = "test-image-name"
          image_pull_policy = "Always"

          port {
            container_port = 4000
          }
        }

      }
    }
  }
  depends_on = [
      kubernetes_secret.docker,
  ]
}

Gotcha: the token expires after 12 hours, so you should either write a bash script that updates the secret in the corresponding namespace, or write a Terraform provisioner that is triggered whenever the token expires.
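
One Terraform-native sketch of that refresh, assuming the hashicorp/time provider is available: a time_rotating resource stamped into the secret's annotations forces the secret to be rewritten (with a freshly read token) on the first apply after the 12-hour window. Note this still only runs on 'terraform apply', not on a timer:

resource "time_rotating" "ecr_token" {
  # Changes its id every 12 hours, matching the ECR token lifetime.
  rotation_hours = 12
}

Then, inside the metadata block of the kubernetes_secret above:

annotations = {
  # A new id forces the secret to be re-applied with a fresh token.
  "ecr-token-rotated-at" = time_rotating.ecr_token.id
}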

I hope this was helpful.

-- Pouya Ataei
Source: StackOverflow