I'm using the Helm prometheus-operator chart: https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml and I expected it to pick up the custom metrics from my Golang API, as I did previously by "hardcoding" the name of the service and the port in the values.yaml file:
scrape_configs:
  - job_name: 'custom-api'
    static_configs:
      - targets: ['custom-api-service.backend.svc.cluster.local:8000']
However, since I have more microservices, I know this can also be done dynamically using the __meta labels (for example __meta_kubernetes_service_name), but I haven't figured out what I should modify in the values.yaml file to make it work.
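For reference, this is roughly what I have in mind, based on the usual Kubernetes service-discovery examples (the job name and the prometheus.io/scrape annotation here are just illustrative), but I don't know if or where something like this belongs in the chart's values.yaml:

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: 'kubernetes-service-endpoints'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          # only scrape services annotated with prometheus.io/scrape: "true"
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          # use the discovered service name as the job label
          - source_labels: [__meta_kubernetes_service_name]
            target_label: job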
Grafana is getting the CPU and memory usage from custom-api, but custom-api is not appearing in the Targets tab of the Prometheus dashboard, which is weird...
These are my services:
apiVersion: v1
kind: Service
metadata:
  name: custom-api-service
  namespace: backend
  labels:
    service: custom-api-service
spec:
  type: NodePort
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 30080
    protocol: TCP
    name: custom-api
  selector:
    component: goapi
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: servicemonitor
  namespace: backend
  labels:
    service: servicemonitor
spec:
  selector:
    matchLabels:
      service: custom-api-service
  endpoints:
  - port: custom-api
You will have to create a ServiceMonitor custom resource to scrape your metrics.
Let's say you have a Kubernetes Service (here: example-app) which is used to communicate with your microservices. Make sure that your microservice exposes Prometheus metrics on a certain port and that the Kubernetes Service also includes that port (here: prom).
kind: Service
apiVersion: v1
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  selector:
    app: example-app
  ports:
  - name: prom
    port: 8080
  - name: other-port
    port: xxxx
This Service object is discovered by a ServiceMonitor, which selects it by labels. You need to make sure that the matchLabels of the ServiceMonitor object match the metadata.labels of the Service.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: prom
Once you have created the ServiceMonitor object, the operator controller will do the rest for you (i.e. update the Prometheus configuration). You can also provide custom scrape configuration via the ServiceMonitor object.
For more details, visit Getting started with the Prometheus Operator.
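For example, per-endpoint settings such as the scrape interval and metrics path can be set on the ServiceMonitor (a minimal sketch; the values below are only illustrative):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: prom
    # illustrative custom settings; adjust to your app
    interval: 15s
    path: /metrics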
The Prometheus resource includes a field called serviceMonitorSelector, which defines a selection of ServiceMonitors to be used. By default, and prior to version v0.19.0, ServiceMonitors must be installed in the same namespace as the Prometheus instance. With the Prometheus Operator v0.19.0 and above, ServiceMonitors can be selected outside the Prometheus namespace via the serviceMonitorNamespaceSelector field of the Prometheus resource.
In the monitoring namespace, create a Prometheus object which selects the ServiceMonitor by the label service: servicemonitor:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: <service-account-name>
  serviceMonitorSelector:
    matchLabels:
      service: servicemonitor
  resources:
    requests:
      memory: 400Mi
  enableAdminAPI: false
You can find the serviceAccountName in the monitoring namespace; it is named helmreleasename-prometheus-operator-prometheus (where helmreleasename is the name of your Helm release).
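Note that in your setup the ServiceMonitor lives in the backend namespace while Prometheus runs in monitoring, so with Operator v0.19.0 and above you would also set serviceMonitorNamespaceSelector on the Prometheus resource. A sketch, assuming you label the backend namespace yourself (the monitored: "true" label is only an example; an empty selector {} would select all namespaces):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: <service-account-name>
  serviceMonitorSelector:
    matchLabels:
      service: servicemonitor
  # select ServiceMonitors from namespaces carrying this (example) label
  serviceMonitorNamespaceSelector:
    matchLabels:
      monitored: "true"
  resources:
    requests:
      memory: 400Mi
  enableAdminAPI: false

and label the namespace accordingly, e.g. kubectl label namespace backend monitored=true.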