I have installed Prometheus into my Kubernetes v1.17 KOPS cluster following kube-prometheus, ensuring the --authentication-token-webhook=true and --authorization-mode=Webhook prerequisites are set and applying the kube-prometheus/kube-prometheus-kops.libsonnet configuration.
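For context, those kubelet flags were set through the kops cluster spec. A minimal sketch of the relevant excerpt (only the kubelet section is shown; the rest of the spec is omitted):

# excerpt from "kops edit cluster" -- kubelet section only
spec:
  kubelet:
    authenticationTokenWebhook: true   # --authentication-token-webhook=true
    authorizationMode: Webhook         # --authorization-mode=Webhook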
I then installed Postgres from https://github.com/helm/charts/tree/master/stable/postgresql with the supplied values-production.yaml and the following set:
metrics:
  enabled: true
  # resources: {}
  service:
    type: ClusterIP
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9187"
    loadBalancerIP:
  serviceMonitor:
    enabled: true
    namespace: monitoring
    interval: 30s
    scrapeTimeout: 10s
Both services are up and working, but Prometheus doesn't discover any metrics from Postgres.
The logs of the metrics container on my Postgres pods show no errors, and neither do the logs of any pod in the monitoring namespace.
What additional steps are required to have the Postgres metrics exporter reach Prometheus?
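For reference, the chart was installed along these lines (Helm 3 syntax assumed; the release name and target namespace below are placeholders rather than my exact values):

# placeholder release name "postgres" and namespace "default"
helm install postgres stable/postgresql \
  --namespace default \
  -f values-production.yaml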
Try updating the ClusterRole for Prometheus. By default, it doesn't have permission to list pods, services, and endpoints in namespaces other than monitoring.
In my system the original ClusterRole was:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-k8s
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
I've changed it to:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-k8s
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  verbs:
  - get
After those changes, the Postgres metrics will be available to Prometheus.
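To apply and verify the change, something like the following works (the manifest file name is only an example):

# apply the updated ClusterRole
kubectl apply -f prometheus-clusterrole.yaml

# confirm the Prometheus service account can now list endpoints
# in a namespace outside "monitoring"
kubectl auth can-i list endpoints \
  --as=system:serviceaccount:monitoring:prometheus-k8s \
  -n default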