I am working on monitoring my company's GitLab instance, which runs on a Kubernetes cluster. We have deployed a separate Prometheus (we are not using the bundled one). My problem is that Prometheus doesn't scrape most of the metrics: I get a lot of Gitaly metrics, but none of the exposed GitLab metrics.
So far I have visited the metrics endpoint to check which metrics are exposed there. I have also verified in the cluster that the ServiceMonitor pointing at the correct endpoint (<url>/-/metrics) exists and has been picked up by the Prometheus Operator: in the Prometheus UI it appears under Service Discovery.
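For reference, the ServiceMonitor looks roughly like this (names, namespaces, and labels are placeholders for our setup):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: gitlab-webservice
  namespace: monitoring
  labels:
    release: prometheus        # must match the operator's serviceMonitorSelector
spec:
  namespaceSelector:
    matchNames:
      - gitlab
  selector:
    matchLabels:
      app: webservice          # must match the labels on the GitLab webservice Service
  endpoints:
    - port: http-webservice    # the *named* port on the Service to scrape
      path: /-/metrics
      interval: 30s
```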
I am at a complete loss as to what the issue could be. I have tried reinstalling the GitLab instance and killing the Prometheus pod to kickstart the scraping process, but that did not seem to work.
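For completeness, this is roughly how I checked that the endpoint itself serves metrics, bypassing Prometheus entirely (service name and port are placeholders for our deployment):

```shell
# Confirm the ServiceMonitor exists and inspect which port/path it scrapes
kubectl -n monitoring get servicemonitor gitlab-webservice -o yaml

# Port-forward to the Service and hit the metrics path directly
kubectl -n gitlab port-forward svc/gitlab-webservice 8080:8080 &
curl -s http://localhost:8080/-/metrics | head
```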
Hopefully someone else here knows what could be the issue.
I decided to double-check everything and figured out that the issue was a port that wasn't set correctly on the Service (by someone else). Prometheus was trying to scrape on port XYZ, but it got no metrics because that was not the port the metrics were actually exposed on.
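For anyone hitting the same thing: the named port referenced by the ServiceMonitor has to resolve to the container port that actually serves /-/metrics. A trimmed-down sketch of what the fixed Service looks like (names and port numbers are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gitlab-webservice
  namespace: gitlab
spec:
  selector:
    app: webservice
  ports:
    - name: http-webservice   # the name the ServiceMonitor endpoint refers to
      port: 8080
      targetPort: 8080        # this was pointing at the wrong container port
```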
Maybe this is now possible, with GitLab 14.9 (March 2022):
GitLab chart improvements
GitLab 14.9 introduces the ability to use Prometheus ServiceMonitor or PodMonitor objects instead of annotations on each of the GitLab components which expose Prometheus metrics. This change allows the usage of the Prometheus Operator to monitor a GitLab instance without supplemental configuration outside of the GitLab chart.
A result of this change is that we now expose metrics on dedicated ports of the Webservice chart, removing access via the primary service port.
See Documentation.
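With that release, enabling the built-in ServiceMonitor through the chart values should look roughly like this (the key names are an assumption based on the webservice chart's metrics settings; verify them against the chart documentation for your version):

```yaml
# values.yaml sketch -- verify the exact keys against the GitLab chart docs
gitlab:
  webservice:
    metrics:
      enabled: true
      serviceMonitor:
        enabled: true
```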