How to relabel scraping jobs on Prometheus-operator?

8/9/2019

I'm trying the prometheus-operator for the first time, and I'm still struggling with the differences in managing Prometheus through it.

The deployment is pretty straightforward, and so is editing the rules; however, I could not figure out how to relabel the exporters with static_configs when using the Prometheus-operator.

What I used to do in the past was customize prometheus.yml and add static_configs to include the labels for each one of the exporters' job names.

I understand that under the Prometheus-operator's hood we have the same settings as usual, but I'm not sure how to achieve the same results as a static_config configuration using the operator.

From what I could understand, I now have to set the relabelings on the ServiceMonitors related to my exporters; however, none of the configurations I've tried produced any results:

I tried metricRelabelings as described in issue 1166, and StaticConfigs as described in issue 1086, without any luck.

For example, this is what I used to do for the kubernetes-cadvisor exporter to set a label in static_config, so that my custom label was applied to all the metrics collected by my exporters at ingestion time:

scrape_configs:
- job_name: prometheus
  static_configs:
  - targets: ['localhost:9090']
    labels:
      kubernetes_namespace: kube-system
      cluster_name: mycluster01

And I also added relabel_configs to each of my exporters' jobs:

- job_name: 'kubernetes-cadvisor'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - target_label: cluster_name
    replacement: mycluster01
  - target_label: kubernetes_namespace
    replacement: kube-system
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}:10250/proxy/metrics

And this is an example of trying to achieve the same using metricRelabelings on the Prometheus-operator, which is still not working for me:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: monitoring
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 30s
    port: https
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
    metricRelabelings:
    - sourceLabels: [__meta_kubernetes_node_name]
      targetLabel: node
  jobLabel: k8s-app
  selector:
    matchLabels:
      k8s-app: node-exporter

What I expect to achieve is a static label on my exporters, so that all the metrics carry the custom label I define at scrape time, instead of having to manually define custom labels on all the deployments in my cluster.

Thanks in advance for any help!

-- Felipe Silveira
kubernetes
label
prometheus
prometheus-operator

2 Answers

8/15/2019

It seems I missed the instructions in the Operator repository. After a closer look there I found some very nice examples; the answer to my question is to create the additional scrape configuration as a Secret, similar to the example at the following link: additional-scrape-configs.yaml

Some additional steps can also be found at the following: additional-scrape-config
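For reference, here is a minimal sketch of that approach, reusing the labels from my original static_config; the Secret name and key below are illustrative, but spec.additionalScrapeConfigs is the actual field on the Prometheus resource that points to them:

apiVersion: v1
kind: Secret
metadata:
  name: additional-scrape-configs   # illustrative name
  namespace: monitoring
stringData:
  prometheus-additional.yaml: |
    # Plain Prometheus scrape config, appended to the one the operator generates.
    - job_name: prometheus
      static_configs:
      - targets: ['localhost:9090']
        labels:
          kubernetes_namespace: kube-system
          cluster_name: mycluster01

And then the Prometheus resource references that Secret by name and key:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
spec:
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml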

-- Felipe Silveira
Source: StackOverflow

8/12/2019

Let's see how this works with an example. First, deploy four instances of an example application, which listens on port 8080 and exposes metrics there.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-application
spec:
  replicas: 4
  selector:
    matchLabels:
      app: example-application
  template:
    metadata:
      labels:
        app: example-application
    spec:
      containers:
      - name: example-application
        image: fabxc/instrumented_app
        ports:
        - name: backend
          containerPort: 8080

The ServiceMonitor has a label selector to select Services and their underlying Endpoint objects. The Service object for the example application selects the Pods by the app label having the example-application value. The Service object also specifies the port on which the metrics are exposed.

kind: Service
apiVersion: v1
metadata:
  name: example-application
  labels:
    app: example-application
spec:
  selector:
    app: example-application
  ports:
  - name: backend
    port: 8080

This Service object is discovered by a ServiceMonitor, which selects in the same way. The app label must have the value example-application.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-application
  labels:
    team: backend-team
spec:
  selector:
    matchLabels:
      app: example-application
  endpoints:
  - port: backend

The Prometheus object defines a serviceMonitorSelector to specify which ServiceMonitors should be included. Above, the label team: backend-team was specified, so that's what the Prometheus object selects by.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceMonitorSelector:
    matchLabels:
      team: backend-team
  resources:
    requests:
      memory: 400Mi

This enables the backend team to create new ServiceMonitors and Services, which lets Prometheus be reconfigured dynamically.
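To add the static label from the question, one approach (a sketch, not taken from the official example) is to put relabelings on the ServiceMonitor endpoint; that field is the operator's counterpart to relabel_configs and supports a static targetLabel/replacement pair. The cluster_name value is taken from the question; the rest repeats the ServiceMonitor above:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-application
  labels:
    team: backend-team
spec:
  selector:
    matchLabels:
      app: example-application
  endpoints:
  - port: backend
    relabelings:
    # Static relabeling: every target scraped through this endpoint
    # gets cluster_name="mycluster01" on all of its metrics.
    - targetLabel: cluster_name
      replacement: mycluster01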

You can also look at this site to read more information about ServiceMonitors in the Prometheus Operator.

-- muscat
Source: StackOverflow