I have a Kubernetes cluster running on DigitalOcean that I want to monitor. When querying the cAdvisor metrics exposed at <apiserver>/api/v1/nodes/<nodename>/proxy/metrics/cadvisor,
I get the following results for container_cpu_load_average_10s:
# HELP container_cpu_load_average_10s Value of container cpu load average over the last 10 seconds.
# TYPE container_cpu_load_average_10s gauge
container_cpu_load_average_10s{container="",id="/",image="",name="",namespace="",pod=""} 0 1579564900287
container_cpu_load_average_10s{container="",id="/docker/0da952be93af76ef4f89c82d39ffc994814386013b0313db0e376ba8c1ca52ec",image="gcr.io/google-containers/hyperkube:v1.16.2",name="kubelet",namespace="",pod=""} 0 1579564899268
container_cpu_load_average_10s{container="",id="/kubepods",image="",name="",namespace="",pod=""} 0 1579564900316
container_cpu_load_average_10s{container="",id="/kubepods/besteffort",image="",name="",namespace="",pod=""} 0 1579564903221
container_cpu_load_average_10s{container="",id="/kubepods/besteffort/pod05e648ab-0d69-46e7-97f5-53fa5547e631",image="",name="",namespace="default",pod="sh2-74cdb7f89b-7wmn2"} 0 1579564889468
container_cpu_load_average_10s{container="",id="/kubepods/besteffort/pod1d3d6f5c-8b8f-47df-87e1-e6796b6c8cac",image="",name="",namespace="kube-system",pod="kubelet-rubber-stamp-7f966c6779-9pj2x"} 0 1579564897907
container_cpu_load_average_10s{container="",id="/kubepods/besteffort/pod35f81ba8-c778-4771-8103-ca6a1f1df3b3",image="",name="",namespace="kube-system",pod="cilium-operator-d5cd7d758-jlc7g"} 0 1579564902427
container_cpu_load_average_10s{container="",id="/kubepods/besteffort/pod7c42ac9d-14e2-4773-9f6b-78745e065d98",image="",name="",namespace="default",pod="sh-68d446d656-pr6lw"} 0 1579564893074
container_cpu_load_average_10s{container="",id="/kubepods/besteffort/pod87c517f4-be8d-4eeb-b550-7edd7b6629c7",image="",name="",namespace="ingress",pod="haproxy-ingress-c5fc9f5d-zbmc7"} 0 1579564903152
container_cpu_load_average_10s{container="",id="/kubepods/besteffort/poda137a036-0931-4d38-a39e-24269eda4558",image="",name="",namespace="kube-system",pod="metrics-server-7cdf9b7694-9ngsb"} 0 1579564906312
Each metric line ends with two numbers: the first is always 0, and the second is something in the area of 1579564906312.
I'm new to Prometheus and I thought a metric could only have one value, but apparently cAdvisor exposes two. Is this a bug, or something about Prometheus I don't know yet? If it's not a bug, how should I treat it, given that the Prometheus expression browser only shows the first value, which is 0?
Each metric has dimensions to it. These lines look like the same metric, but that is only the name; what distinguishes them from each other is their labels.
If you look closely at your output, you'll see these are the CPU loads of different pods in different namespaces.
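To make that concrete, here is a minimal sketch (assuming the prometheus_client Python package is installed; the two sample lines are trimmed from your output and purely illustrative) that parses the scraped text and shows that the same metric name produces separate series, each identified by its own label set:

from prometheus_client.parser import text_string_to_metric_families

# Two of the scraped lines from the question, trimmed for brevity.
scrape_text = '''\
# TYPE container_cpu_load_average_10s gauge
container_cpu_load_average_10s{container="",id="/kubepods",image="",name="",namespace="",pod=""} 0 1579564900316
container_cpu_load_average_10s{container="",id="/kubepods/besteffort/pod05e648ab-0d69-46e7-97f5-53fa5547e631",image="",name="",namespace="default",pod="sh2-74cdb7f89b-7wmn2"} 0 1579564889468
'''

for family in text_string_to_metric_families(scrape_text):
    for sample in family.samples:
        # The metric name is identical on every line, but each distinct
        # label set (id, namespace, pod, ...) is its own time series.
        print(sample.name, sample.labels["namespace"], sample.labels["pod"], sample.value)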
EDIT: The format in which Prometheus exposes its metrics is:
metric_name [
  "{" label_name "=" `"` label_value `"` { "," label_name "=" `"` label_value `"` } [ "," ] "}"
] value [ timestamp ]
This shows that the last number on each line is a timestamp (milliseconds since the Unix epoch), not a second value; the actual sample value is the 0 before it.
Read more at https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md
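For completeness, here is a dependency-free sketch of how one of your scraped lines breaks down according to that grammar (the line is copied from the output above):

# Split one scraped line into the parts named by the grammar:
# metric_name, labels, value, optional timestamp.
line = ('container_cpu_load_average_10s{container="",id="/kubepods",image="",'
        'name="",namespace="",pod=""} 0 1579564900316')

labels_start = line.index("{")
labels_end = line.rindex("}")

metric_name = line[:labels_start]
labels = line[labels_start + 1:labels_end]
value, timestamp_ms = line[labels_end + 1:].split()

print(metric_name)        # container_cpu_load_average_10s
print(float(value))       # 0.0 -> the sample value, which is what Prometheus stores and shows
print(int(timestamp_ms))  # 1579564900316 -> milliseconds since the Unix epoch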