I need help optimizing my Kubernetes cluster. I have an autoscaling Azure Kubernetes cluster with 3 nodes; as requests overwhelm the system, the cluster scales up to meet demand.
I have a Persistent Volume in the cluster that stores the critical server resources shared across all my master and worker pods. I use AzureFile to mount the server resources onto my pods (via the Persistent Volume): I upload the files to the Azure file share and let the pods fetch them from the volume mount. The server resources the pods use total about 3.5 GiB, which is relatively large.
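For reference, here is a minimal sketch (not my exact manifest) of how such an Azure Files-backed Persistent Volume can be declared through the Kubernetes Python client; the share name, secret name, and capacity below are assumptions, not my actual values:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Hypothetical names/sizes; the secret holds the storage account name and key.
pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="server-resources-pv"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "5Gi"},
        access_modes=["ReadWriteMany"],  # Azure Files shares can be mounted by many pods
        azure_file=client.V1AzureFilePersistentVolumeSource(
            secret_name="azure-storage-secret",  # assumption
            share_name="server-resources",       # assumption
        ),
    ),
)
core.create_persistent_volume(body=pv)
```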
The issue is that I don't know how to measure the time a pod takes to mount the server resources from the Persistent Volume, every time a new pod is started or a new node is scaled up to meet demand. I need to compare it with the time taken to download the server resources using blobfuse.
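What I can measure so far is the aggregate scheduled-to-ready latency of a pod, which bundles the volume mount together with image pull and container start. A minimal sketch, assuming the pods carry a hypothetical app=worker label:

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when run inside the cluster
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("default", label_selector="app=worker").items:
    conditions = {c.type: c.last_transition_time for c in pod.status.conditions or []}
    scheduled, ready = conditions.get("PodScheduled"), conditions.get("Ready")
    if scheduled and ready:
        # Ready minus PodScheduled covers volume mount + image pull + container
        # start, so this is an upper bound on the mount time, not the mount time alone.
        print(f"{pod.metadata.name}: {(ready - scheduled).total_seconds():.1f}s scheduled -> ready")
```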
I did not find a universal way to calculate the time taken for a pod to mount the Persistent Volume, but I did manage to benchmark my worker's performance when backing the Persistent Volume with an Azure Premium file share versus a Standard file share. The Premium share was more than twice as fast!
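A simplified version of the kind of read benchmark I used is below: it times a full sequential read of the resource tree through the volume mount, which is roughly the cost a fresh pod pays to pull everything in. The /mnt/resources path is an assumption, so point it at your own volumeMount path, and run it in a freshly started pod so the kernel page cache does not flatter the numbers:

```python
import os
import time

MOUNT_PATH = "/mnt/resources"  # assumption: change to your volumeMount path
CHUNK = 4 * 1024 * 1024        # read in 4 MiB chunks

total_bytes = 0
start = time.monotonic()
for root, _dirs, files in os.walk(MOUNT_PATH):
    for name in files:
        with open(os.path.join(root, name), "rb") as f:
            while chunk := f.read(CHUNK):
                total_bytes += len(chunk)
elapsed = time.monotonic() - start

mib = total_bytes / 2**20
print(f"read {mib:.0f} MiB in {elapsed:.1f}s ({mib / elapsed:.1f} MiB/s)")
```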
Anyway, the following link may be useful for anyone else looking in this direction and hoping to find an answer. Microsoft documents the scale targets of Azure Files here - https://docs.microsoft.com/en-us/azure/storage/files/storage-files-scale-targets
There you can find the target throughput for a single file share (up to 60 MiB/s for the Standard tier) as well as the maximum egress for a single file share (up to 6,204 MiB/s for the Premium tier). That is a huge difference.
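To put those numbers against my workload: 3.5 GiB is about 3,584 MiB, so at the Standard target of 60 MiB/s the transfer alone takes roughly 3584 / 60 ≈ 60 seconds, while the Premium ceiling of 6,204 MiB/s is over a hundred times higher - at that point the share's throughput should, in principle, no longer be the bottleneck.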