How Can I Monitor Persistent Volume Metrics in Kubernetes 1.13?

7/18/2019

I have a Kubernetes 1.13 cluster running on Azure, and I'm using multiple persistent volumes for multiple applications. I have set up monitoring with Prometheus, Alertmanager, and Grafana.

But I'm unable to get any metrics related to the PVs.

It seems that the kubelet started exposing some of these metrics in Kubernetes 1.8, but stopped again as of 1.12.

I have already spoken to the Azure team about a workaround to collect the metrics directly from the underlying file system (Azure Disk in my case), but even that is not possible.

I have also heard of people using sidecars in their Pods to gather PV metrics, but I haven't found any guidance on that either.

It would be great even if I could get just basic details like consumed / available free space.

-- AmartyaAC
kubernetes
persistent-volumes
prometheus

1 Answer

7/19/2019

I was having the same issue and solved it by joining two metrics:

avg(label_replace(
1 - node_filesystem_free_bytes{mountpoint=~".*pvc.*"} / node_filesystem_size_bytes,
"volumename", "$1",  "mountpoint", ".*(pvc-[^/]*).*")) by (volumename) 
+ on(volumename) group_left(namespace, persistentvolumeclaim)
(0 * kube_persistentvolumeclaim_info)

As an explanation: I'm adding a label volumename to every time series of node_filesystem_*, extracted from the existing mountpoint label, and then joining with the other metric, which carries the additional labels (namespace and persistentvolumeclaim from kube-state-metrics). Multiplying that metric by 0 ensures the join is otherwise a no-op on the computed usage values.
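To make the label_replace step easier to follow, here it is in isolation. The mountpoint value below is a hypothetical example of the kind of path the kubelet creates for dynamically provisioned volumes; the exact paths on your nodes may differ depending on the volume plugin and node-exporter mount configuration:

```
# Hypothetical input series from node-exporter:
#   node_filesystem_free_bytes{mountpoint="/var/lib/kubelet/pods/<uid>/volumes/kubernetes.io~azure-disk/pvc-abc123"}
#
# The regex ".*(pvc-[^/]*).*" captures the "pvc-..." segment of the
# mountpoint and copies it into a new label "volumename":
label_replace(
  node_filesystem_free_bytes{mountpoint=~".*pvc.*"},
  "volumename", "$1", "mountpoint", ".*(pvc-[^/]*).*"
)
# Hypothetical result:
#   node_filesystem_free_bytes{mountpoint="...", volumename="pvc-abc123"}
```

After this step, both sides of the join share the volumename label, which is what `on(volumename)` matches against.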

Also, a quick warning: either of us may be using relabeling configs that prevent this from working immediately without adaptation.

-- TheAnonym
Source: StackOverflow