I am trying to create a Kubernetes cluster, a namespace, and secrets via Terraform. The cluster is created, but the resources that build on the cluster fail to be created.
This is the error message Terraform throws after the Kubernetes cluster has been created, when the namespace is about to be created:
azurerm_kubernetes_cluster_node_pool.mypool: Creation complete after 6m4s [id=/subscriptions/aaabcde1-abcd-abcd-abcd-aaaaaaabdce/resourcegroups/myrg/providers/Microsoft.ContainerService/managedClusters/my-aks/agentPools/win]
Error: Post https://my-aks-abcde123.hcp.australiaeast.azmk8s.io:443/api/v1/namespaces: dial tcp: lookup my-aks-abcde123.hcp.australiaeast.azmk8s.io on 10.128.10.5:53: no such host
on mytf.tf line 114, in resource "kubernetes_namespace" "my":
114: resource "kubernetes_namespace" "my" {
I can resolve this by manually authenticating against the Kubernetes cluster on the command line and then applying the outstanding changes with another terraform apply:
az aks get-credentials -g myrg -n my-aks --overwrite-existing
My attempt to automate this authentication step failed. I tried a local-exec provisioner inside the definition of the Kubernetes cluster, without success:
resource "azurerm_kubernetes_cluster" "myCluster" {
  name                = "my-aks"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "my-aks"

  network_profile {
    network_plugin = "azure"
  }

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_B2s"
  }

  service_principal {
    client_id     = azuread_service_principal.tfapp.application_id
    client_secret = azuread_service_principal_password.tfapp.value
  }

  tags = {
    Environment = "demo"
  }

  windows_profile {
    admin_username = "myself"
    admin_password = random_string.password.result
  }

  provisioner "local-exec" {
    command = "az aks get-credentials -g myrg -n my-aks --overwrite-existing"
  }
}
This is an example of a resource that fails to be created:
resource "kubernetes_namespace" "my" {
  metadata {
    name = "my-namespace"
  }
}
Is there a way to fully automate the creation of my resources, including those that are based on the kubernetes cluster, without manual authentication?
In the documentation for the Terraform AKS resource there is an example of creating an authenticated Kubernetes provider:
provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.main.kube_config.0.host
  username               = azurerm_kubernetes_cluster.main.kube_config.0.username
  password               = azurerm_kubernetes_cluster.main.kube_config.0.password
  client_certificate     = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.cluster_ca_certificate)
}
Then you can create a Kubernetes namespace or secret with Terraform.
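For instance, a secret could be sketched like this, assuming the provider block above is configured; the secret name and the key/value pairs here are placeholders:

```hcl
resource "kubernetes_secret" "example" {
  metadata {
    name      = "my-secret"
    namespace = "my-namespace"
  }

  # placeholder key/value pairs; the provider base64-encodes the values for you
  data = {
    username = "admin"
    password = random_string.password.result
  }
}
```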
For your requirements, I think you can separate the creation of the AKS cluster from the creation of the resources inside it. When creating the AKS cluster, you just need to put the local-exec provisioner in a null_resource, like this:
resource "null_resource" "example" {
  # make sure the cluster exists before pulling its credentials
  depends_on = [azurerm_kubernetes_cluster.myCluster]

  provisioner "local-exec" {
    command = "az aks get-credentials -g ${azurerm_resource_group.rg.name} -n my-aks --overwrite-existing"
  }
}
When the AKS cluster creation is finished, you can create your namespace through Terraform again.
This way you do not need to authenticate manually; just execute the Terraform code.
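The two-step workflow could look roughly like this, assuming the resource names above (`-target` is a standard Terraform CLI flag for limiting an apply to specific resources):

```
# Step 1: create only the AKS cluster and its dependencies;
# the null_resource's local-exec then pulls the kubeconfig
terraform apply -target=azurerm_kubernetes_cluster.myCluster -target=null_resource.example

# Step 2: create the remaining resources (namespace, secrets) on the now-reachable cluster
terraform apply
```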