Promtail, Grafana, and Loki are all version 2.4.1, running in Kubernetes.
I was following the documentation.
I expected an error stacktrace to end up as a single entry in Grafana/Loki, but every line becomes a separate entry. Am I missing some configuration?
# cat /etc/promtail/promtail.yaml
server:
  log_level: info
  http_listen_port: 3101
client:
  url: http://***-loki:3100/loki/api/v1/push
positions:
  filename: /run/promtail/positions.yaml
scrape_configs:
  # See also https://github.com/grafana/loki/blob/master/production/ksonnet/promtail/scrape_config.libsonnet for reference
  - job_name: kubernetes-pods
    pipeline_stages:
      - multiline:
          firstline: ^\x{200B}\[
          max_lines: 128
          max_wait_time: 3s
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels:
          - __meta_kubernetes_pod_controller_name
        regex: ([0-9a-z-.]+?)(-[0-9a-f]{8,10})?
        action: replace
        target_label: __tmp_controller_name
      - source_labels:
          - __meta_kubernetes_pod_label_app_kubernetes_io_name
          - __meta_kubernetes_pod_label_app
          - __tmp_controller_name
          - __meta_kubernetes_pod_name
        regex: ^;*([^;]+)(;.*)?$
        action: replace
        target_label: app
      - source_labels:
          - __meta_kubernetes_pod_label_app_kubernetes_io_component
          - __meta_kubernetes_pod_label_component
        regex: ^;*([^;]+)(;.*)?$
        action: replace
        target_label: component
      - action: replace
        source_labels:
          - __meta_kubernetes_pod_node_name
        target_label: node_name
      - action: replace
        source_labels:
          - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        replacement: $1
        separator: /
        source_labels:
          - namespace
          - app
        target_label: job
      - action: replace
        source_labels:
          - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
          - __meta_kubernetes_pod_container_name
        target_label: container
      - action: replace
        replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
          - __meta_kubernetes_pod_uid
          - __meta_kubernetes_pod_container_name
        target_label: __path__
      - action: replace
        regex: true/(.*)
        replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
          - __meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash
          - __meta_kubernetes_pod_annotation_kubernetes_io_config_hash
          - __meta_kubernetes_pod_container_name
        target_label: __path__
It turned out that the logs look different from what we see in Lens pod logs or kubectl logs {pod}.
The original logs consumed by Promtail can be found on the host machine:
minikube ssh
cat /var/log/pods/{namespace}_{pod}/{container}/0.log
They look something like this:
{"log":"[default-nioEventLoopGroup-1-1] INFO HTTP_ACCESS_LOGGER - \"GET /health/readiness HTTP/1.1\" 200 523\n","stream":"stdout","time":"2021-12-17T12:26:29.702621198Z"}
So the firstline regexp did not match any log line: each line that Promtail scrapes starts with the JSON wrapper ({"log": ...), not with the stacktrace text itself. Unfortunately, there are no errors about this in the Promtail logs.
This is the Docker log format, and there is a pipeline stage to parse it:
- docker: {}
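For context: the docker stage unwraps that JSON envelope, using the log field as the new entry content, the time field as the entry timestamp, and stream as a label, so the multiline stage running after it finally sees the raw application text. If I read the Loki documentation correctly, docker: {} behaves roughly like this expanded pipeline (shown only for illustration, it is not part of my config):
- json:
    expressions:
      output: log
      stream: stream
      timestamp: time
- labels:
    stream:
- timestamp:
    source: timestamp
    format: RFC3339Nano
- output:
    source: output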
Additionally, there was an issue in the logs: there were extra line breaks inside the multiline stacktraces, so this additional pipeline stage filters them out:
- replace:
    expression: '(\n)'
    replace: ''
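To illustrate with a made-up stacktrace (not one of my real log lines; the leading zero-width space that the firstline regex looks for is omitted for readability): after the multiline stage joins the lines, the entry can still contain embedded newlines, and this replace stage strips every \n so the whole stacktrace is stored as one physical line in Loki:
# before the replace stage (one entry, several lines, including a blank one)
[main] ERROR MyService - request failed
java.lang.RuntimeException: boom

    at com.example.MyService.call(MyService.java:42)
# after the replace stage (same entry, newlines removed)
[main] ERROR MyService - request failed java.lang.RuntimeException: boom    at com.example.MyService.call(MyService.java:42)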
So my working config looks like this:
server:
  log_level: info
  http_listen_port: 3101
client:
  url: http://***-loki:3100/loki/api/v1/push
positions:
  filename: /run/promtail/positions.yaml
scrape_configs:
  # See also https://github.com/grafana/loki/blob/master/production/ksonnet/promtail/scrape_config.libsonnet for reference
  - job_name: kubernetes-pods
    pipeline_stages:
      - docker: {}
      - multiline:
          firstline: ^\x{200B}\[
          max_lines: 128
          max_wait_time: 3s
      - replace:
          expression: (\n)
          replace: ""
    # config continues below (not copied)
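One way to sanity-check the pipeline without redeploying Promtail is its stdin mode: per the Loki troubleshooting docs, promtail can read log lines from stdin and print the resulting entries instead of pushing them (--dry-run), and --inspect shows how each pipeline stage changes an entry. A rough sketch, assuming 0.log is a local copy of the pod log file shown above and promtail.yaml is the config above (if I remember correctly, in stdin mode Promtail applies the pipeline stages of the first scrape config in the file):
cat 0.log | promtail --config.file promtail.yaml \
  --stdin --dry-run --inspect \
  --client.url http://***-loki:3100/loki/api/v1/push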