I must just not understand PromQL yet, but everything I read says this query should work fine:
node_cpu
Really simple, right? It's just the name of my metric, and I do get those series in my result set:
node_cpu{app="prometheus",chart="prometheus-6.2.1",component="node-exporter",cpu="cpu0",heritage="Tiller",instance="10.85.166.16:9100",io_cattle_field_appId="prometheus",job="kubernetes-service-endpoints",kubernetes_name="prometheus-node-exporter",kubernetes_namespace="prometheus",mode="guest_nice",release="prometheus"} 0 node_cpu{app="prometheus",chart="prometheus-6.2.1",component="node-exporter",cpu="cpu0",heritage="Tiller",instance="10.85.166.16:9100",io_cattle_field_appId="prometheus",job="kubernetes-service-endpoints",kubernetes_name="prometheus-node-exporter",kubernetes_namespace="prometheus",mode="idle",release="prometheus"} 1784679.96
node_cpu{app="prometheus",chart="prometheus-6.2.1",component="node-exporter",cpu="cpu0",heritage="Tiller",instance="10.85.166.16:9100",io_cattle_field_appId="prometheus",job="kubernetes-service-endpoints",kubernetes_name="prometheus-node-exporter",kubernetes_namespace="prometheus",mode="iowait",release="prometheus"} 2897.73
But I also get a ton of other, unwanted metrics:
kubelet_runtime_operations_latency_microseconds_count{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",instance="la-1pk8s-w4",job="kubernetes-nodes",kubernetes_io_hostname="la-1pk8s-w4",node_role_kubernetes_io_worker="true",operation_type="image_status"}
container_start_time_seconds{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",id="/docker/8effa9b35affbf17118e7cc83a586d70da9fa960097ab717076c7251bf4eb324",image="rancher/rke-tools:v0.1.13",instance="la-1pk8s-w2",job="kubernetes-nodes-cadvisor",kubernetes_io_hostname="la-1pk8s-w2",name="rke-log-linker-nginx-proxy",node_role_kubernetes_io_worker="true"}
storage_operation_duration_seconds_bucket{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",instance="la-1pk8s-w4",job="kubernetes-nodes",kubernetes_io_hostname="la-1pk8s-w4",le="0.1",node_role_kubernetes_io_worker="true",operation_name="volume_unmount",volume_plugin="kubernetes.io/configmap"}
I'm not sure why they are there; it's strange. So I figure I'll filter on the label component="node-exporter", since that label only exists on the metrics I want.
node_cpu{component="node-exporter"} yields the same result set.
node_cpu{component=~"node-exporter"} yields the same result set.
Why can't I just get all the node_cpu metrics, and why is the filtering not working? Thanks.
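From what I've read, writing the metric name as an explicit __name__ matcher should be equivalent to the form above, so I'd expect either of these selectors to return only the node-exporter series (just restating the same matchers, nothing new in them):

{__name__="node_cpu", component="node-exporter"}
node_cpu{component="node-exporter", job="kubernetes-service-endpoints"}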
Either this is a bug that was fixed in Prometheus 2.3.0, or you have a remote_read endpoint configured that's returning undesired results.
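If it's the latter, check your prometheus.yml for a remote_read block. As a rough sketch (the URL below is just a placeholder for whatever endpoint you actually have configured):

remote_read:
  - url: "http://example.remote-storage.local/api/v1/read"   # placeholder URL
    # Queries can be fanned out to this endpoint in addition to local storage,
    # so a misbehaving remote end can add series that don't match your selector.
    read_recent: true

Commenting that section out (or restricting it with required_matchers) and re-running the query should tell you whether the extra series are coming from the remote end rather than from your local TSDB.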