I am trying to dynamically provision storage using a StorageClass I've defined with the azure-file provisioner. I've tried setting both the storageAccount and skuName parameters in the StorageClass. Here is my example with storageAccount set.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuretestfilestorage
  namespace: kube-system
provisioner: kubernetes.io/azure-file
parameters:
  storageAccount: <storage_account_name>
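For reference, the skuName variant I tried was essentially the same manifest with the parameter swapped out (Standard_LRS here is just the SKU I happened to test with):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuretestfilestorage
  namespace: kube-system
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS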
The StorageClass is created successfully; however, when I try to create a PersistentVolumeClaim using this StorageClass, provisioning of the persistent volume fails with this error:
Failed to provision volume with StorageClass "azuretestfilestorage": failed to find a matching storage account
Here is the manifest for my PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-persistent-volume-claim-test
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: azuretestfilestorage
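I created the claim with kubectl (the file name is just what I happened to call it locally):

kubectl create -f azure-file-pvc.yaml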
My storage account is definitely in the same resource group and region as my ACS cluster. My understanding is that a secret, a persistent volume, and a file share should be generated automatically. Instead, the claim just gets stuck in a Pending state with the above error.
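The Pending state and the provisioning error are both visible on the claim itself, e.g.:

kubectl get pvc logging-persistent-volume-claim-test --namespace kube-system
kubectl describe pvc logging-persistent-volume-claim-test --namespace kube-system

The describe output shows the provisioning failure under Events.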
Here is the output of my kubectl version command:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.7", GitCommit:"8e1552342355496b62754e61ad5f802a0f3f1fa7", GitTreeState:"clean", BuildDate:"2017-09-28T23:56:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Any input would be appreciated. Thanks!
I emailed Microsoft Azure support about this and received an answer.
There is a bug in ACS Kubernetes version 1.7.7 that prevents dynamic persistent volume claims from working if the --cluster-name value in /etc/kubernetes/manifests/kube-controller-manager.yaml on the master node VM is longer than 16 characters. Very obscure bug. The fix is to upgrade your cluster or redeploy it with a shorter cluster name.
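If you want to confirm whether your cluster is affected before upgrading, you can SSH into the master node VM and inspect the flag (assuming the standard manifest path mentioned above):

# print the controller-manager's cluster-name flag on the master node
grep -- '--cluster-name' /etc/kubernetes/manifests/kube-controller-manager.yaml
# the cluster is affected if the value after '=' is longer than 16 characters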
Here is the bug report: https://github.com/andyzhangx/demo/blob/master/issues/azurefile-issues.md#4-azure-file-dynamic-provision-failed-due-to-cluster-name-length-issue