I have a metrics endpoint, `/actuator/prometheus`, that is protected by authentication. I have set up a JVM monitoring job in Prometheus as follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  replicas: 1
  selector:
    ...
  strategy:
    ...
  template:
    metadata:
      labels:
        app: ...
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/actuator/prometheus"
        prometheus.io/port: "80"
    spec:
      ...
```
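For context on the application side, the endpoint is exposed and protected roughly like this (a sketch assuming Spring Boot Actuator with `spring-boot-starter-security` and its default basic-auth behaviour; the property values are placeholders, not my real config):

```yaml
# application.yml (sketch, not the actual app config)
management:
  endpoints:
    web:
      exposure:
        include: prometheus   # exposes /actuator/prometheus
spring:
  security:
    user:
      name: ...               # same credentials used in the scrape config below
      password: ...
```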
I then added an `xxx-jvm` job under `extraScrapeConfigs` to specify the credentials:

```yaml
extraScrapeConfigs: |
  - job_name: 'xxx-jvm'
    kubernetes_sd_configs:
      - role: pod
    basic_auth:
      username: ...
      password: ...
    relabel_configs:
      ...
```
Then I checked the Prometheus dashboard under Status -> Targets. The `xxx-jvm` job has one endpoint and it shows as up (all the JVM metrics show up correctly). But the `kubernetes-pods` section lists the same endpoint as down, which keeps the `up == 0` alert firing.
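For reference, the alert is just a generic instance-down rule; only the `up == 0` expression is from my setup, while the rule name, duration, and group name below are placeholders:

```yaml
groups:
  - name: instance-alerts          # placeholder group name
    rules:
      - alert: InstanceDown        # placeholder alert name
        expr: up == 0              # this is the expression that keeps firing
        for: 5m                    # placeholder duration
```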
It looks like the `prometheus.io/scrape: "true"` annotation also added an endpoint to the `kubernetes-pods` job, besides the `xxx-jvm` job, and that endpoint does not use the credentials I specified in the `xxx-jvm` job config. How do I specify the credentials for this endpoint in `kubernetes-pods`? Or how can I make the `xxx-jvm` job and `kubernetes-pods` use different paths?
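For context, my understanding is that the `kubernetes-pods` job comes from the Prometheus Helm chart's default scrape config, which reacts to the same annotations but has no `basic_auth` section. Roughly (paraphrased from the chart defaults, details may differ by chart version):

```yaml
- job_name: kubernetes-pods
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # keep only pods annotated with prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # use prometheus.io/path as the metrics path
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # use prometheus.io/port as the scrape port
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
```

If that is the case, my pod matches this job too, and the scrape fails there because no credentials are sent.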