Azure AKS cluster - additional disk storage for each agent

10/22/2019

Is it possible to provision an AKS cluster with Terraform so that each agent VM has additional storage attached?

Currently, PV/PVC based on the SMB protocol is a bit of a joke, so my plan is to use Rook or GlusterFS. I would like to provision my cluster with Terraform so that each of my nodes also carries a proper amount of storage, instead of creating separate regular nodes just to host it.

Best Regards.

-- MrHetii
azure
azure-aks
glusterfs
kubernetes

2 Answers

11/18/2019

Since using aks-engine just to get extra storage was a bit of a waste of resources for me, I finally found a way to add extra storage to each AKS node entirely through Terraform:

resource "azurerm_subnet" "subnet" {
  name                = "aks-subnet-${var.name}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  # Uncomment this line if you use terraform < 0.12:
  # network_security_group_id = "${azurerm_network_security_group.sg.id}"
  address_prefix            = "10.1.0.0/24"
  virtual_network_name      = "${azurerm_virtual_network.network.name}"
}

resource "azurerm_kubernetes_cluster" "cluster" {
  ...
  agent_pool_profile {
    ...
    count           = "${var.kubernetes_agent_count}"
    vnet_subnet_id  = "${azurerm_subnet.subnet.id}"
  }
}

# Find all agent nodes by extracting their names from the NIC IP
# configurations of the subnet assigned to the cluster:
data "azurerm_virtual_machine" "aks-node" {
  # This data source represents each node created for the AKS cluster.
  count = "${var.kubernetes_agent_count}"
  # Each entry in ip_configurations is a NIC IP configuration ID; segment 8
  # of that ID is the NIC name, from which the "nic-" prefix is stripped to
  # recover the VM name.
  name  = distinct([for x in azurerm_subnet.subnet.ip_configurations : replace(element(split("/", x), 8), "/nic-/", "")])[count.index]

  resource_group_name = azurerm_kubernetes_cluster.cluster.node_resource_group
  depends_on = [
    azurerm_kubernetes_cluster.cluster
  ]
}

# Create one managed disk per AKS node:
resource "azurerm_managed_disk" "aks-extra-disk" {
  count                = "${var.kubernetes_agent_count}"
  name                 = "${azurerm_kubernetes_cluster.cluster.name}-disk-${count.index}"
  location             = "${azurerm_kubernetes_cluster.cluster.location}"
  resource_group_name  = "${azurerm_kubernetes_cluster.cluster.resource_group_name}"
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 10
}

# Attach one of our disks to each agent:
resource "azurerm_virtual_machine_data_disk_attachment" "aks-disk-attachment" {
  count              = "${var.kubernetes_agent_count}"
  managed_disk_id    = "${azurerm_managed_disk.aks-extra-disk[count.index].id}"
  virtual_machine_id = "${data.azurerm_virtual_machine.aks-node[count.index].id}"
  lun                = "10"
  caching            = "ReadWrite"
}
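To sanity-check the result, you can expose the created disk IDs as an output (a minimal sketch; the output name `aks_extra_disk_ids` is my own choice, not part of the answer above):

```hcl
# Optional: expose the IDs of the extra disks so you can verify
# the attachments, e.g. with `terraform output aks_extra_disk_ids`.
output "aks_extra_disk_ids" {
  value = azurerm_managed_disk.aks-extra-disk[*].id
}
```

Note that the disks arrive as raw, unformatted block devices on each node; something like Rook can then consume them directly as OSD devices.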
-- MrHetii
Source: StackOverflow

10/23/2019

I'm afraid you cannot achieve that in AKS through Terraform. AKS is a managed service, so there is little you can customize in it.

Given your requirements, I would suggest you use aks-engine, with which you can manage the cluster yourself, even the master nodes. You can use the property diskSizesGB in the agentPoolProfiles. Its description:

Describes an array of up to 4 attached disk sizes. Valid disk size values are between 1 and 1024.

More details in clusterdefinitions. You can also take a look at the example for the diskSizesGB here.
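As a rough illustration, a cluster definition using diskSizesGB might look like this (a minimal sketch, not a complete aks-engine API model; the pool name, count, and VM size are placeholder values):

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_D2_v3",
        "diskSizesGB": [128]
      }
    ]
  }
}
```

Each agent VM in the pool then gets the listed data disks (here, a single 128 GB disk) attached at provisioning time.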

-- Charles Xu
Source: StackOverflow