No Kafka metrics in Grafana/Prometheus

7/17/2018

I successfully deployed the prometheus-operator, kube-prometheus and kafka Helm charts (tried both danielqsj/kafka_exporter image versions, v1.0.1 and v1.2.0).

I installed mostly with default values; RBAC is enabled.

I can see 3 nodes up in the Kafka target list in Prometheus, but when I go to Grafana, I can't see any Kafka metrics on the Kafka Overview dashboard.

Is there anything I missed, or what can I check to fix this issue?

I can see metrics starting with java_ and kafka_, but no jvm_ metrics and only a few jmx_ metrics.
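For reference, one way to check which metric families the exporter container actually exposes is to port-forward to it and grep its /metrics endpoint. A rough sketch (the pod name my-kafka-0 and port 5556 are placeholders, not taken from my setup):

    # forward the exporter port from one broker pod (name/port are placeholders)
    kubectl port-forward my-kafka-0 5556:5556 &
    # list the jvm_ metric families, if any are exposed
    curl -s http://localhost:5556/metrics | grep '^jvm_'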


I found someone who reported a similar issue (https://groups.google.com/forum/#!searchin/prometheus-users/jvm_%7Csort:date/prometheus-users/OtYM7qGMbvA/dZ4vIfWLAgAJ), so I deployed with older versions of the JMX exporter, from 0.6 to 0.9, but there are still no jvm_ metrics.

Is there anything else I missed?

env:

kubernetes: AWS EKS (Kubernetes version 1.10.x)

public Grafana dashboard: Kafka Overview

-- Bill
amazon-eks
apache-kafka
kubernetes
kubernetes-helm

2 Answers

7/17/2018

You have to turn on the JMX and Kafka exporters for the kafka Helm chart by providing --set prometheus.jmx.enabled=true,prometheus.kafka.enabled=true. Both values are false by default.
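For example, a minimal sketch of the install, assuming Helm 2 and the incubator/kafka chart (the release name my-kafka is illustrative):

    helm install --name my-kafka \
      --set prometheus.jmx.enabled=true,prometheus.kafka.enabled=true \
      incubator/kafka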

-- abinet
Source: StackOverflow

7/17/2018

I just realised that the owner of jmx-exporter mentioned this in the README:

This exporter is intended to be run as a Java Agent, exposing a HTTP server and serving metrics of the local JVM. It can be also run as an independent HTTP server and scrape remote JMX targets, but this has various disadvantages, such as being harder to configure and being unable to expose process metrics (e.g., memory and CPU usage). Running the exporter as a Java Agent is thus strongly encouraged.

I didn't really understand what that meant until I saw this comment:

https://github.com/prometheus/jmx_exporter/issues/111#issuecomment-341983150

@brian-brazil can you add some sort of tip to the readme that jvm_* metrics are only exposed when using the Java agent? It took me an hour or two of troubleshooting and searching old issues to figure this out, after playing only with the HTTP server version. Thanks!

So jmx-exporter has to be run as a Java agent to get jvm_ metrics. jmx_prometheus_httpserver doesn't support them, but that is the default setting in the kafka Helm chart:

https://github.com/kubernetes/charts/blob/master/incubator/kafka/templates/statefulset.yaml#L82

command:
- sh
- -exc
- |
  trap "exit 0" TERM; \
  while :; do \
  java \
  -XX:+UnlockExperimentalVMOptions \
  -XX:+UseCGroupMemoryLimitForHeap \
  -XX:MaxRAMFraction=1 \
  -XshowSettings:vm \
  -jar \
  jmx_prometheus_httpserver.jar \              # <<< here
  {{ .Values.prometheus.jmx.port | quote }} \
  /etc/jmx-kafka/jmx-kafka-prometheus.yml & \
  wait $! || sleep 3; \
  done
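By contrast, when the exporter runs as a Java agent it lives inside the broker JVM itself, which is what makes the jvm_* metrics available. A rough sketch of that style of invocation, with an assumed agent jar path and port (the config path is the one from the chart snippet above; this is not the chart's actual template):

    # attach jmx_exporter as a Java agent so it runs inside the Kafka broker JVM
    export KAFKA_OPTS="-javaagent:/opt/jmx_prometheus_javaagent.jar=5556:/etc/jmx-kafka/jmx-kafka-prometheus.yml"
    bin/kafka-server-start.sh config/server.properties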
-- Bill
Source: StackOverflow