I'm trying to set up monitoring of our Kubernetes cluster, but it's not that easy. At first I tried, from a dedicated VM, to scrape all the metrics by following configs I found on the Internet and on prometheus.io, but I read several times that this is not the best way to do it. I found a suggestion to use kube-state-metrics; that's done, the pod is running and the metrics are reachable from outside the cluster (Azure infra), so http://xxx.xxx.xxx.xxx:8080/metrics shows me a correct result.
When I add this to the config:
- job_name: 'Kubernetes-Nodes'
  scheme: http
  #tls_config:
  #  insecure_skip_verify: true
  kubernetes_sd_configs:
    - api_server: 'http://xxx.xxx.xxx.xxx:8080'
      role: endpoints
      namespaces:
        names: [default]
      #tls_config:
      #  insecure_skip_verify: true
      bearer_token: %VERYLONGLINE%
  relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: kubernetes_name
The log I can find is:
Sep 25 06:53:59 monitoring001 prometheus[59005]: level=error ts=2018-09-25T06:53:59.636669498Z caller=main.go:234 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:288: Failed to list *v1.Pod: serializer for text/html; charset=utf-8 doesn't exist"
Does anyone have an idea?
Thank you,
Finally found the issue! My Prometheus is located on a dedicated VM outside the Kubernetes cluster.
Kube-state-metrics is exposing its metrics on an IP reachable from outside the cluster. Because of this, there is no need to scrape it through Kubernetes service discovery as a Kubernetes object; it is enough to scrape it as a simple static target.
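For reference, a minimal sketch of what that simple static scrape job could look like. The job name is just an example I picked, and xxx.xxx.xxx.xxx:8080 stands for the external IP/port of kube-state-metrics as above:

- job_name: 'kube-state-metrics'        # example name, not from my original config
  scheme: http
  metrics_path: /metrics                # this is the default anyway
  static_configs:
    - targets: ['xxx.xxx.xxx.xxx:8080'] # external address where kube-state-metrics is exposed

With a static target like this, Prometheus simply does an HTTP GET on http://xxx.xxx.xxx.xxx:8080/metrics, so no api_server, bearer_token or Kubernetes service discovery is needed for that job.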