K8S monitoring stack configuration with alerts

10/22/2019

I am trying to set up a k8s monitoring stack for my on-premises cluster. What I want to set up is:

  • Prometheus
  • Grafana
  • Kube-state-metrics
  • Alertmanager
  • Loki

I can find a lot of resources for setting this up, for example:

I have doubts regarding the configuration of the alert notifications.

  • All three setups mentioned above include the Grafana UI, so there is an option to configure alert rules and notification channels through that UI.

  • But in the first option, Prometheus alert rules are configured as part of the Prometheus setup, and notification channels are configured as part of the Alertmanager setup, using ConfigMaps / CRDs (a rough sketch of what that looks like follows this list).
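
For reference, this is roughly what the rule and receiver definitions look like on the Prometheus/Alertmanager side in that first option. The exact resource kind depends on the setup (a PrometheusRule CRD with the operator, or a plain ConfigMap of rule files with vanilla Prometheus); the rule name, namespace, and webhook URL below are only placeholders.

    # Alert rule on the Prometheus side (PrometheusRule CRD when using the operator;
    # a plain ConfigMap mounted as a rule file works the same way otherwise).
    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      name: example-rules        # placeholder name
      namespace: monitoring      # placeholder namespace
    spec:
      groups:
        - name: node.rules
          rules:
            - alert: NodeDown
              expr: up{job="node-exporter"} == 0
              for: 5m
              labels:
                severity: critical
              annotations:
                summary: "node-exporter target has been down for 5 minutes"

    # Notification channel on the Alertmanager side (alertmanager.yaml, usually
    # delivered through a Secret or ConfigMap depending on the setup).
    route:
      receiver: default
    receivers:
      - name: default
        webhook_configs:
          - url: http://example-webhook:8080/alerts   # placeholder endpoint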

Which is the better configuration option?

What is the difference between setting up alerts and channels via the Grafana UI versus defining Prometheus rules and notification channels via such ConfigMaps / CRDs?

What are the advantages and disadvantages of both methods?

-- AnjanaDyna
grafana
kubernetes
monitoring
prometheus
prometheus-alertmanager

1 Answer

11/12/2019

I chose the third option and set up prometheus-operator in a namespace, because that chart configures Prometheus, Grafana, and Alertmanager together. Prometheus is added as a datasource in Grafana by default. The chart's values file also lets you add extra alert rules for Prometheus, as well as datasources and dashboards for Grafana (a rough sketch is below).
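
As a rough sketch, extra alert rules can be declared straight in the chart's values file. The additionalPrometheusRules key is as I remember it for the stable/prometheus-operator chart at the time, so check the default values.yaml for your chart version, and treat the rule itself as a placeholder.

    # values.yaml overrides for the prometheus-operator chart
    additionalPrometheusRules:
      - name: custom-rules
        groups:
          - name: custom.rules
            rules:
              - alert: HighPodRestartRate
                expr: increase(kube_pod_container_status_restarts_total[15m]) > 3
                for: 10m
                labels:
                  severity: warning
                annotations:
                  summary: "Pod is restarting frequently"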

Then I configured Loki in the same namespace and added it as a datasource in Grafana. I also configured a webhook receiver to forward notifications from Alertmanager to MS Teams (a rough sketch is below).
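
Roughly, that part also went through the chart's values file. This assumes Loki is reachable as loki:3100 in the same namespace and that a prometheus-msteams connector (the bridge that turns Alertmanager webhook calls into Teams messages) is running as a Service named prometheus-msteams on port 2000; the grafana.additionalDataSources and alertmanager.config keys are as I remember them for that chart, so double-check them against your chart version.

    # More values.yaml overrides for the same chart
    grafana:
      additionalDataSources:
        - name: Loki
          type: loki
          url: http://loki:3100        # assumes Loki's service in the same namespace
          access: proxy

    alertmanager:
      config:
        route:
          receiver: msteams
          group_by: ['alertname', 'namespace']
        receivers:
          - name: msteams
            webhook_configs:
              - url: http://prometheus-msteams:2000/alertmanager   # assumed connector Service
                send_resolved: true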

-- AnjanaDyna
Source: StackOverflow