In the Prometheus expression browser, querying the container_cpu_usage_seconds_total metric returns results with different labels in two different k8s clusters.
cluster 1 (k8s v1.15.9):
container_cpu_usage_seconds_total{container="POD",container_name="POD",cpu="total",endpoint="https-metrics",id="/kubepods/besteffort/pod00xxxxx-ef9f-4959-b2cf-95e9c6dba800/bbff610aeeb79874c69228068f07b9c3a395a3933b33387fd681ef91aa188897",image="reg.k8s.io/google_containers/pause-amd64:3.1",instance="192.168.110.120:10250",job="kubelet",name="k8s_POD_guestbook-ui-57d98b678-w5csk_argo-cd_0068264c-ef9f-4959-b2cf-95e9c6dba800_0",namespace="argo-cd",node="k8s-w5",pod="guestbook-ui-57d98b678-w5csk",pod_name="guestbook-ui-57d98b678-w5csk",service="kubelet"}
cluster 2 (k8s v1.18.10):
container_cpu_usage_seconds_total{cpu="total", endpoint="https-metrics", id="/kubepods/besteffort/pod07a4289a-9ae4-42fd-a7d5-5fe7d8680071", instance="192.168.120.10:10250", job="kubelet", metrics_path="/metrics/cadvisor", namespace="eds", node="cluster-master-1", pod="kong-7dc748b8d5-5x5qf", service="kube-kube-prometheus-stack-kubelet"}
No "image" "container" labels found in the second cluster. How can I configure the second one to make it have the missing labels?
There is a difference in their scrape_configs: the second one doesn't have metric_relabel_configs. Does metric_relabel_configs affect the available labels?
metric_relabel_configs:
# drop container_* series whose "image" label is empty
- source_labels: [__name__, image]
  separator: ;
  regex: container_([a-z_]+);
  replacement: $1
  action: drop
# keep only the listed container metrics
- source_labels: [__name__]
  separator: ;
  regex: container_cpu_usage_seconds_total|container_memory_usage_bytes|container_memory_cache|container_network_.+_bytes_total|container_memory_working_set_bytes
  replacement: $1
  action: keep
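As I understand Prometheus relabeling, each rule is matched (anchored) against the source label values joined by the separator, so for the drop rule above the string being matched would look like this (values taken from the series shown earlier):

container_cpu_usage_seconds_total;reg.k8s.io/google_containers/pause-amd64:3.1   -> no full match, series kept
container_cpu_usage_seconds_total;                                               -> image empty, full match, series dropped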
When I executed the curl command against the second cluster, the result did in fact have image and container labels, but with empty values.
curl -k --header "Authorization: Bearer $TOKEN" https://[k8s_ip]:10250/metrics/cadvisor
Result:
container_cpu_usage_seconds_total{container="",cpu="total",id="/kubepods/besteffort/pod07a4289a-9ae4-42fd-a7d5-5fe7d8680071",image="",name="",namespace="eds",pod="kong-7dc748b8d5-5x5qf"} 738.009191131 1617976437601
Your symptoms are somewhat similar to this issue.
The high-level symptom is that curl /metrics returns blanks for image, namespace, etc. It appears that kubelet's view of the universe has diverged significantly from Docker's, hence it does not have the metadata to tag container metrics.
In my case I was running Docker with a nonstandard root directory, and telling kubelet this explicitly via --docker-root fixed the problem.
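If you want to check whether the same thing is happening to you, something along these lines might help (a sketch only; /data/docker is a placeholder, the drop-in path depends on how your kubelet is installed, and --docker-root is deprecated in newer kubelet releases):

# Docker's actual root directory (kubelet assumes /var/lib/docker by default)
docker info --format '{{ .DockerRootDir }}'

# If it differs, tell kubelet explicitly, e.g. via a systemd drop-in
# such as /etc/systemd/system/kubelet.service.d/20-docker-root.conf:
[Service]
Environment="KUBELET_EXTRA_ARGS=--docker-root=/data/docker"

# then reload and restart
systemctl daemon-reload && systemctl restart kubelet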