I have installed kube-prometheus-stack as a dependency in my Helm chart on a local Docker for Mac Kubernetes cluster (v1.19.7). I can view the default Prometheus targets provided by kube-prometheus-stack.
I have a Python Flask service that exposes metrics, which I can view successfully in the Kubernetes cluster using kubectl port-forward.
However, I am unable to get these metrics displayed on the Prometheus targets web interface.
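For context, the Flask side of such a setup can be sketched as below. This is a hypothetical minimal endpoint assuming the prometheus_client library; the metric name and route are illustrative, not taken from the actual service:

```python
from flask import Flask
from prometheus_client import Counter, generate_latest, CONTENT_TYPE_LATEST

app = Flask(__name__)

# Hypothetical counter; the real service will expose its own metrics.
REQUESTS = Counter("flaskapi_requests_total", "Total requests handled")

@app.route("/metrics")
def metrics():
    # Count the scrape, then render all registered metrics in the
    # Prometheus text exposition format.
    REQUESTS.inc()
    return generate_latest(), 200, {"Content-Type": CONTENT_TYPE_LATEST}
```

In the cluster the app would listen on port 4444 so that the Service's targetPort below lines up with it.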
The kube-prometheus-stack documentation states that annotation-based discovery of services via prometheus.io/scrape is not supported. Instead, the reader is referred to the concepts of ServiceMonitors and PodMonitors.
So, I have configured my service as follows:
---
kind: Service
apiVersion: v1
metadata:
  name: flask-api-service
  labels:
    app: flask-api-service
spec:
  ports:
    - protocol: TCP
      port: 4444
      targetPort: 4444
      name: web
  selector:
    app: flask-api-service
    tier: backend
  type: ClusterIP
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: flask-api-service
spec:
  selector:
    matchLabels:
      app: flask-api-service
  endpoints:
    - port: web
Subsequently, I have set up a port forward to view the metrics:
kubectl port-forward prometheus-flaskapi-kube-prometheus-s-prometheus-0 9090
Then I visited the Prometheus web page at http://localhost:9090.
When I select the Status->Targets menu option, my flask-api-service is not displayed.
I know that the service is up and running, and I have checked that I can view the metrics for a pod of my flask-api-service using kubectl port-forward <pod name> 4444.
Looking at a similar issue, it appears there is a configuration value serviceMonitorSelectorNilUsesHelmValues that defaults to true. Setting this to false is supposed to make the operator pick up ServiceMonitors outside its own Helm release labels.
I tried adding this to the values.yaml of my Helm chart, in addition to an extraScrapeConfigs configuration value. However, the flask-api-service still does not appear as an additional target on the Prometheus web page when clicking the Status->Targets menu option.
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
extraScrapeConfigs: |
  - job_name: 'flaskapi'
    static_configs:
      - targets: ['flask-api-service:4444']
How do I get my flask-api-service recognised on the Prometheus targets page at http://localhost:9090?
I am installing kube-prometheus-stack as a dependency via my Helm chart with default values, as shown below:
Chart.yaml
apiVersion: v2
appVersion: "0.0.1"
description: A Helm chart for flaskapi deployment
name: flaskapi
version: 0.0.1
dependencies:
  - name: kube-prometheus-stack
    version: "14.4.0"
    repository: "https://prometheus-community.github.io/helm-charts"
  - name: ingress-nginx
    version: "3.25.0"
    repository: "https://kubernetes.github.io/ingress-nginx"
  - name: redis
    version: "12.9.0"
    repository: "https://charts.bitnami.com/bitnami"
values.yaml
docker_image_tag: dcs3spp/
hostname: flaskapi-service
redis_host: flaskapi-redis-master.default.svc.cluster.local
redis_port: "6379"
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
extraScrapeConfigs: |
  - job_name: 'flaskapi'
    static_configs:
      - targets: ['flask-api-service:4444']
The Prometheus custom resource definition has a field called serviceMonitorSelector. Prometheus only picks up ServiceMonitors that match this selector. In the case of a Helm deployment, the selector matches on your release name:
release: {{ $.Release.Name | quote }}
So adding this label to your ServiceMonitor should solve the issue. Your ServiceMonitor manifest file will then be:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: flask-api-service
  labels:
    release: <your_helm_release_name>
spec:
  selector:
    matchLabels:
      app: flask-api-service
  endpoints:
    - port: web