I spun up 1,000 pods on my cluster, and ~800 of them got stuck in ContainerCreating with this status:
    Warning  FailedMount  8s  kubelet, k8s-alsjkdf  Unable to mount volumes for pod "test-xvbbf_default(05706f3d-12a2-11e8-9e41-000d3a028eee)": timeout expired waiting for volumes to attach/mount for pod "default"/"test-xvbbf". list of unattached/unmounted volumes=[some list of volumes]
    Warning  FailedSync   8s  kubelet, k8s-alsjkdf  Error syncing pod
I noticed that only ~60 of my pods were actually running at any one time. All of these pods share the same PVCs.
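For reference, the events above came from kubectl describe pod; I got the rough status counts with a one-liner along these lines (the STATUS column is the third field in my kubectl version, so adjust if yours differs):

    # Count pods grouped by their STATUS column
    kubectl get pods --no-headers | awk '{print $3}' | sort | uniq -c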
I couldn't find any mention of a limit in the Kubernetes documentation. The documentation for Azure Files, which backs the PVs, states that it supports around 2,000 concurrent handles, so I don't think that's the issue.
Is this a known limit in Kubernetes, or is it set somewhere in the configuration?
Note: the pods all eventually completed, so I'm not worried about that.
I think you're running into a limitation on the number of disks that can be attached to a node at any given time.
For a Basic Tier VM, the maximum number of highly utilized disks is about 66 (20,000/300 IOPS per disk), and for a Standard Tier VM, it is about 40 (20,000/500 IOPS per disk).
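Put differently, those figures are just the per-VM IOPS budget divided by the per-disk IOPS, and each VM size also has a hard MaxDataDiskCount cap on how many data disks can be attached at once. I don't know which VM size your agent nodes use, but you can check it and compare against the published per-size limits with something like the following (the instance-type label is set on Azure-provisioned nodes, and <location> is just a placeholder for your region):

    # Show the VM size each node was provisioned with (via the instance-type node label)
    kubectl describe nodes | grep instance-type

    # List per-size limits for your region, including MaxDataDiskCount (replace <location>)
    az vm list-sizes --location <location> --output table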