How to issue a Let's Encrypt certificate for k8s (AKS) using Terraform resources?

8/14/2019

Summary

I am unable to issue a valid certificate for my Terraform-managed Kubernetes cluster on Azure AKS. The domain and certificate are created successfully (the cert shows up on crt.sh); however, the certificate is not applied to my domain, and my browser reports "Kubernetes Ingress Controller Fake Certificate" as the applied certificate.

The Terraform files are converted, to the best of my abilities, from a working set of YAML files (which issue certificates just fine). See my Terraform code here.

UPDATE! In the original question I was also unable to create certificates. This was fixed by using the "tls_cert_request" resource from here. The change is included in my updated code below.

Here are some things I have checked and found NOT to be the issue:

  • The number of certificates issued by ACME/Let's Encrypt is not above the rate limits for either staging or prod.
  • I get the same "Fake certificate" error when using either the staging or the prod certificate server.

Here are some areas that I am currently investigating as potential sources for the error.

  • I do not see a Terraform equivalent of the cert-manager YAML field "privateKeySecretRef", and consequently I do not know what the value of the "certmanager.k8s.io/cluster-issuer" annotation on my ingress should be.

If anyone has any other suggestions, I would really appreciate hearing them (as this has been bugging me for quite some time now)!

Certificate Resources

provider "acme" {
  server_url = var.context.cert_server
}

resource "tls_private_key" "reg_private_key" {
  algorithm = "RSA"
}

resource "acme_registration" "reg" {
  account_key_pem = tls_private_key.reg_private_key.private_key_pem
  email_address = var.context.email
}

resource "tls_private_key" "cert_private_key" {
  algorithm = "RSA"
}

resource "tls_cert_request" "req" {
  key_algorithm   = "RSA"
  private_key_pem = tls_private_key.cert_private_key.private_key_pem
  dns_names       = [var.context.domain_address]

  subject {
    common_name = var.context.domain_address
  }
}

resource "acme_certificate" "certificate" {
  account_key_pem = acme_registration.reg.account_key_pem
  certificate_request_pem = tls_cert_request.req.cert_request_pem

  dns_challenge {
    provider = "azure"
    config = {
      AZURE_CLIENT_ID = var.context.client_id
      AZURE_CLIENT_SECRET = var.context.client_secret
      AZURE_SUBSCRIPTION_ID = var.context.azure_subscription_id
      AZURE_TENANT_ID = var.context.azure_tenant_id
      AZURE_RESOURCE_GROUP = var.context.azure_dns_rg
    }
  }
}

Pypiserver Ingress Resource

resource "kubernetes_ingress" "pypi" {
  metadata {
    name = "pypi"
    namespace = kubernetes_namespace.pypi.metadata[0].name

    annotations = {
      "kubernetes.io/ingress.class" = "inet"
      "kubernetes.io/tls-acme" = "true"
      "certmanager.k8s.io/cluster-issuer" = "letsencrypt-prod"
      "ingress.kubernetes.io/ssl-redirect" = "true"
    }
  }

  spec {
    tls {
      hosts = [var.domain_address]
    }
    rule {
      host = var.domain_address

      http {
        path {
          path = "/"

          backend {
            service_name = kubernetes_service.pypi.metadata[0].name
            service_port = "http"
          }
        }
      }
    }
  }
}
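
As far as I understand, the "Fake Certificate" is what the NGINX ingress controller falls back to when it cannot find a valid TLS secret for the host. One workaround I am considering (skipping cert-manager entirely) is to push the certificate issued by the acme provider into the cluster as a kubernetes.io/tls secret and reference it from the ingress. This is only a rough, untested sketch; the secret name "pypi-tls" is just a placeholder:

resource "kubernetes_secret" "pypi_tls" {
  metadata {
    name      = "pypi-tls"
    namespace = kubernetes_namespace.pypi.metadata[0].name
  }

  type = "kubernetes.io/tls"

  data = {
    # Full chain: leaf certificate followed by the issuer certificate.
    "tls.crt" = "${acme_certificate.certificate.certificate_pem}${acme_certificate.certificate.issuer_pem}"
    # Private key belonging to the CSR above (not the ACME account key).
    "tls.key" = tls_private_key.cert_private_key.private_key_pem
  }
}

The ingress tls block would then also need a matching secret_name = "pypi-tls" so the controller serves this certificate instead of its default one.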

Let me know if more info is required, and I will update my question with whatever is missing. Lastly, I will leave the Terraform code git repo up so it can serve as help for others.

-- Krande
azure-kubernetes
kubernetes
lets-encrypt
terraform
terraform-provider-azure

1 Answer

8/21/2019

The answer to my question was that I had to add cert-manager to my cluster, and as far as I can tell there are no native Terraform resources to create it. I ended up using Helm for my ingress controller and cert-manager.
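
Roughly, the cert-manager part of that Helm setup looks like the sketch below. Treat the repository, namespace, and values as placeholders rather than my exact code; the available chart values (e.g. installCRDs) depend on which cert-manager chart version you target:

resource "helm_release" "cert_manager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  namespace  = "cert-manager"

  # Newer chart versions can install the CRDs for you; older ones require a
  # separate kubectl apply of the CRD manifests before the release.
  set {
    name  = "installCRDs"
    value = "true"
  }
}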

The setup ended up a bit more complex than I initially imagined, and as it stands now it needs to be run twice. This is because the kubeconfig is not updated on the first run (I have to run "set KUBECONFIG=.kubeconfig" before running "terraform apply" a second time). So it's not pretty, but it "works" as a minimal example to get your deployment up and running.

There are definitely ways of simplifying the pypi deployment part using native Terraform resources, and there is probably an easy fix for the kubeconfig not being updated, but I have not had time to investigate further.
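
For what it's worth, one direction that might remove the manual kubeconfig step (I have not verified this) is to have Terraform write the kubeconfig file itself, so it exists where the second apply expects it; the AKS resource name "aks" below is just a placeholder:

resource "local_file" "kubeconfig" {
  # kube_config_raw is the raw kubeconfig exported by the azurerm AKS resource.
  content  = azurerm_kubernetes_cluster.aks.kube_config_raw
  filename = "${path.module}/.kubeconfig"
}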

If anyone has tips for a more elegant, functional, and (probably most of all) secure minimal Terraform setup for a k8s cluster, I would love to hear them!

Anyway, for those interested, the resulting Terraform code can be found here.

-- Krande
Source: StackOverflow