format of fluentd logs

9/19/2019

I'm having issues figuring out how to parse logs in my k8s cluster using fluentd.

I get some logs that look like this:

2019-09-19 16:05:44.257 [info] some log message

I would like to parse out the time, the log level, and the log message.

I've tested the regex on its own and it seems to capture the pieces I need. But when I add it to the ConfigMap that holds my fluentd configuration, the shipped logs still show up as one long message like the line above, and the level is not parsed out. I'm brand new to fluentd, so I'm not sure how to get this working.
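For reference, this is roughly the standalone parse block I was expecting to work, built from the regex I tested (the time_format is only my guess at matching the timestamp in the sample line, since it uses a space and no timezone suffix):

<parse>
  @type regexp
  expression /^(?<time>.+) (\[(?<level>.*)\]) (?<log>.*)$/
  time_key time
  # guessed format for a timestamp like 2019-09-19 16:05:44.257
  time_format %Y-%m-%d %H:%M:%S.%N
</parse>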

The ConfigMap looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-config-map
  namespace: logging
  labels:
    k8s-app: fluentd-logzio
data:
  fluent.conf: |-
    <match fluent.**>
      # This tells fluentd not to output its own logs on stdout
      @type null
    </match>

    # Here we read the logs from Docker's containers and parse them
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format /^(?<time>.+) (\[(?<level>.*)\]) (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>

    # Enrich records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    <match kubernetes.**>
      @type logzio_buffered
      @id out_logzio
      endpoint_url ###
      output_include_time true
      output_include_tags true
      <buffer>
        # Use the file buffer type to improve reliability and reduce memory consumption
        @type file
        path /var/log/fluentd-buffers/stackdriver.buffer
        # Block when the buffer overflows so we pause gracefully under
        # excessive load instead of throwing an exception
        overflow_action block
        # Keep the chunk size conservative to avoid exceeding the GCL limit
        # of 10MiB per write request.
        chunk_limit_size 2M
        # Cap the combined memory usage of this buffer and the one below to
        # 2MiB/chunk * (6 + 2) chunks = 16 MiB
        queue_limit_length 6
        # Never wait more than 5 seconds before flushing logs in the non-error case.
        flush_interval 5s
        # Never wait longer than 30 seconds between retries.
        retry_max_interval 30
        # Disable the limit on the number of retries (retry forever).
        retry_forever true
        # Use multiple threads for processing.
        flush_thread_count 2
      </buffer>
    </match>
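
What I'm hoping to see in Logz.io is the line split into separate fields, roughly like:

{"time": "2019-09-19 16:05:44.257", "level": "info", "log": "some log message"}

Instead the whole line arrives as a single unparsed message with no level field. What am I missing in the config above?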
-- Matthew The Terrible
docker
fluentd
kubernetes
logging

0 Answers