How to create a private AKS cluster in an existing VNET using Terraform

7/29/2021

I am trying to provision a private AKS cluster using Terraform, and I want to connect it to an existing VNet that I created using the Azure portal.

The Virtual network option is available in the Azure portal. Please find the below image.

[screenshot: Virtual network option in the Azure portal]

However, the Terraform documentation for azurerm_kubernetes_cluster has very limited information on how to achieve this.

Please find my main.tf below

resource "azurerm_kubernetes_cluster" "kubernetes_cluster" {                                                                
  name                    = var.cluster_name                                                                                
  location                = var.location                                                                                    
  resource_group_name     = var.resource_group_name                                                                         
  private_cluster_enabled = true                                                                                            
                                                                                                                            
  default_node_pool {                                                                                                       
    name           = "default"                                                                                              
    node_count     = var.node_count                                                                                         
    vm_size        = var.vm_size                                                                                            
    max_pods       = var.max_pods_count                                                                                     
  }                                                                                                                         
                                                                                                                            
  kube_dashboard {                                                                                                          
    enabled = true                                                                                                          
  }                                                                                                                         
                                                                                                                            
  network_profile {                                                                                                         
    network_plugin = "azure"   
  }                                                                                             
} 

Please note that the VNet and the cluster to be created share the same location and resource group.

Any help on how to provision a private AKS cluster to an existing VNET using Terraform would be much appreciated.

-- pnkjkmr469
azure
azure-aks
kubernetes
terraform
terraform-provider-azure

1 Answer

8/1/2021

I adapted existing code from GitHub with a few changes. Since we already have a VNet, I used a data block instead of a resource block to read the existing VNet, and instead of using the default subnet I created one subnet for AKS and another for the firewall.

terraform {
  required_version = ">= 0.14"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=2.50.0"
    }
  }
}

provider "azurerm" {
  features {}
}

#local vars

locals {
  environment             = "test"
  resource_group          = "AKS-test"
  resource_group_location = "East US"
  name_prefix             = "private-aks"
  aks_node_prefix         = ["10.3.1.0/24"]
  firewall_prefix         = ["10.3.2.0/24"]
}

#Existing vnet with address space "10.3.0.0/16"
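# The aks_node_prefix and firewall_prefix locals above must fall inside this
# address space and must not overlap subnets that already exist in the VNet.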
data "azurerm_virtual_network" "base" {
  name                = "existing-vnet"
  resource_group_name = "AKS-test"
}

#subnets

resource "azurerm_subnet" "aks" {
  name                 = "snet-${local.name_prefix}-${local.environment}"
  resource_group_name  = local.resource_group
  address_prefixes     = local.aks_node_prefix
  virtual_network_name = data.azurerm_virtual_network.base.name
}

resource "azurerm_subnet" "firewall" {
  name                 = "AzureFirewallSubnet"
  resource_group_name  = local.resource_group
  virtual_network_name = data.azurerm_virtual_network.base.name
  address_prefixes     = local.firewall_prefix
}
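
# Note: the subnet used by Azure Firewall must be named exactly "AzureFirewallSubnet".
#
# If the AKS subnet already exists in the VNet, a data source could be used
# instead of the azurerm_subnet resource above (a sketch; the subnet name here
# is only an assumed example):
#
# data "azurerm_subnet" "aks_existing" {
#   name                 = "existing-aks-subnet"
#   virtual_network_name = data.azurerm_virtual_network.base.name
#   resource_group_name  = local.resource_group
# }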

#user assigned identity

resource "azurerm_user_assigned_identity" "base" {
  resource_group_name = local.resource_group
  location            = local.resource_group_location
  name                = "mi-${local.name_prefix}-${local.environment}"
}

#role assignment
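# The cluster's user-assigned identity needs network permissions on the resource
# group that holds the VNet and route table, hence the Network Contributor role below.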

resource "azurerm_role_assignment" "base" {
  scope                = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/AKS-test"
  role_definition_name = "Network Contributor"
  principal_id         = azurerm_user_assigned_identity.base.principal_id
}
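
# To avoid hard-coding the subscription ID in the scope above, the resource
# group could be read through a data source instead (a sketch):
#
# data "azurerm_resource_group" "base" {
#   name = local.resource_group
# }
#
# ...and then: scope = data.azurerm_resource_group.base.id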

#route table

resource "azurerm_route_table" "base" {
  name                = "rt-${local.name_prefix}-${local.environment}"
  location            = data.azurerm_virtual_network.base.location
  resource_group_name = local.resource_group
}

#route 
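# With outbound_type = "userDefinedRouting" on the cluster, the AKS subnet needs
# a default route (0.0.0.0/0) pointing at the firewall's private IP address.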

resource "azurerm_route" "base" {
  name                   = "dg-${local.environment}"
  resource_group_name    = local.resource_group
  route_table_name       = azurerm_route_table.base.name
  address_prefix         = "0.0.0.0/0"
  next_hop_type          = "VirtualAppliance"
  next_hop_in_ip_address = azurerm_firewall.base.ip_configuration.0.private_ip_address
}

#route table association

resource "azurerm_subnet_route_table_association" "base" {
  subnet_id      = azurerm_subnet.aks.id
  route_table_id = azurerm_route_table.base.id
}

#firewall

resource "azurerm_public_ip" "base" {
  name                = "pip-firewall"
  location            = data.azurerm_virtual_network.base.location
  resource_group_name = local.resource_group
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_firewall" "base" {
  name                = "fw-${local.name_prefix}-${local.environment}"
  location            = data.azurerm_virtual_network.base.location
  resource_group_name = local.resource_group

  ip_configuration {
    name                 = "ip-${local.name_prefix}-${local.environment}"
    subnet_id            = azurerm_subnet.firewall.id
    public_ip_address_id = azurerm_public_ip.base.id
  }
}
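
# Note: on azurerm provider 3.x, azurerm_firewall also requires sku_name
# (e.g. "AZFW_VNet") and sku_tier (e.g. "Standard"); they are optional on 2.x.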

#kubernetes_cluster

resource "azurerm_kubernetes_cluster" "base" {
  name                    = "${local.name_prefix}-${local.environment}"
  location                = local.resource_group_location
  resource_group_name     = local.resource_group
  dns_prefix              = "dns-${local.name_prefix}-${local.environment}"
  private_cluster_enabled = true

  network_profile {
    network_plugin = "azure"
    outbound_type  = "userDefinedRouting"
  }

  default_node_pool {
    name           = "default"
    node_count     = 1
    vm_size        = "Standard_D2_v2"
    vnet_subnet_id = azurerm_subnet.aks.id
  }

  identity {
    type                      = "UserAssigned"
    user_assigned_identity_id = azurerm_user_assigned_identity.base.id
  }
  depends_on = [
    azurerm_route.base,
    azurerm_role_assignment.base,
    # the route table must be associated with the AKS subnet before the cluster
    # is created, because outbound_type is set to userDefinedRouting
    azurerm_subnet_route_table_association.base
  ]
}
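
Optionally, outputs can expose the private cluster's connection details after creation. A minimal sketch (the output names are my own; private_fqdn and kube_config_raw are attributes exported by azurerm_kubernetes_cluster):

# expose the private DNS name of the API server
output "aks_private_fqdn" {
  value = azurerm_kubernetes_cluster.base.private_fqdn
}

# raw kubeconfig for the cluster; marked sensitive so it is not printed in plan output
output "aks_kube_config" {
  value     = azurerm_kubernetes_cluster.base.kube_config_raw
  sensitive = true
}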

Reference: Github

Before the test: [screenshot]

Running terraform plan on the above code: [screenshot]

After applying the code: [screenshot]

After the deployment: [screenshot]

-- AnsumanBal-MT
Source: StackOverflow