After a few days of testing Azure AKS, I find myself in a situation where existing AKS instances are not cleaned up when I delete the parent resource group (or via az aks delete), and I am also unable to create new AKS instances. Has anyone encountered the same issue?
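For reference, the cleanup I have been attempting is roughly the following (a sketch using the K8Cluster / K8 names from the list below); neither command actually removes the failed clusters:

# attempt to delete a single AKS cluster
az aks delete --resource-group K8 --name K8Cluster --yes

# attempt to delete the parent resource group, which should remove the cluster with it
az group delete --name K8 --yes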
Current state:
rbigeard@ROMAINWORK199A:~|⇒ az aks list -o table
Name           Location    ResourceGroup    KubernetesVersion    ProvisioningState    Fqdn
-------------  ----------  ---------------  -------------------  -------------------  ----------------------------------------------------------
K8Cluster      westus2     K8               1.8.1                Failed
K8Cluster2     westus2     K8               1.8.1                Failed
K8test         westus2     K8               1.7.7                Failed
K8TestCluster  westus2     K8Test           1.7.7                Failed
myK8Cluster    westus2     myK8Group        1.7.7                Failed               myk8cluste-myk8group-5ec36a-b448f367.hcp.westus2.azmk8s.io
myK8s          westus2     myK8Group        1.8.1                Failed
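For completeness, the individual failed clusters can also be inspected directly (a sketch; any of the clusters above can be substituted):

# show the full managed cluster resource, including provisioningState
az aks show --resource-group myK8Group --name myK8Cluster -o json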
Creation error in a brand new empty resource group in westus2:
az aks create --name K8TestCluster --resource-group K8Test --agent-count 1 --generate-ssh-keys
Deployment failed. Correlation ID: 27476ee2-fea2-406a-83bd-89de89d7aec1. getAndWaitForManagedClusterProvisioningState error: <nil>
The CLI version is as follows (I run it in WSL):
az --version
azure-cli (2.0.20)
acr (2.0.14)
acs (2.0.18)
appservice (0.1.19)
backup (1.0.2)
batch (3.1.6)
batchai (0.1.2)
billing (0.1.6)
cdn (0.0.10)
cloud (2.0.9)
cognitiveservices (0.1.9)
command-modules-nspkg (2.0.1)
component (2.0.8)
configure (2.0.12)
consumption (0.1.6)
container (0.1.12)
core (2.0.20)
cosmosdb (0.1.14)
dla (0.0.13)
dls (0.0.16)
eventgrid (0.1.5)
extension (0.0.5)
feedback (2.0.6)
find (0.2.7)
interactive (0.3.11)
iot (0.1.13)
keyvault (2.0.13)
lab (0.0.12)
monitor (0.0.11)
network (2.0.17)
nspkg (3.0.1)
profile (2.0.15)
rdbms (0.0.8)
redis (0.2.10)
resource (2.0.17)
role (2.0.14)
servicefabric (0.0.5)
sql (2.0.14)
storage (2.0.18)
vm (2.0.17)
Python location '/opt/az/bin/python3'
Extensions directory '/home/rbigeard/.azure/cliextensions'
Python (Linux) 3.6.1 (default, Oct 18 2017, 20:41:18)
[GCC 4.8.4]
Legal docs and information: aka.ms/AzureCliLegal
Apologies for the service disruption. There was a provisioning/capacity issue affecting the regional Kubernetes service; it was resolved today. You can view the resolution updates @ https://github.com/Azure/AKS/issues/2
The status of additional known Kubernetes issues is being tracked @ https://github.com/Azure/AKS/issues.