Fluentd parser types with Kibana

10/31/2018

I am using the EFK stack in my Kubernetes cluster to parse logs from an nginx ingress controller. I can specify field types in Fluentd, and the values are shipped to Elasticsearch as numbers, but Kibana does not recognize the fields as numeric.

  • elasticsearch:v6.2.5
  • kibana-oss:6.2.4
  • fluentd-elasticsearch:v2.2.0

Fluentd config

<filter kubernetes.var.log.containers.nginx-ingress-controller-**.log>
  @type parser
  format /(?<remote_addr>[^ ]*) - \[(?<proxy_protocol_addr>[^ ]*)\] - (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<request>[^\"]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*) "(?<referer>[^\"]*)" "(?<agent>[^\"]*)" (?<request_length>[^ ]*) (?<request_time>[^ ]*) \[(?<proxy_upstream_name>[^ ]*)\] (?<upstream_addr>[^ ]*) (?<upstream_response_length>[^ ]*) (?<upstream_response_time>[^ ]*) (?<upstream_status>[^ ]*) (?<upstream_id>[^ ]*)/
  time_format %d/%b/%Y:%H:%M:%S %z
  key_name log
  # Retain the original "log" field after parsing out the data.
  reserve_data true

  # These get sent to ES as the correct types.
  types request_length:integer,request_time:float
</filter>
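For reference, the `types` directive above just applies a cast to the named capture groups after the regex matches. A minimal Python sketch of that behaviour (the sample line and trimmed field set are illustrative, not taken from a real nginx log):

```python
import re

# Casts corresponding to "types request_length:integer,request_time:float"
TYPES = {"request_length": int, "request_time": float}

# Trimmed pattern covering only three of the fields from the full regex.
PATTERN = re.compile(
    r'(?P<code>[^ ]*) (?P<request_length>[^ ]*) (?P<request_time>[^ ]*)'
)

def parse(line):
    """Extract named groups, then cast the fields listed in TYPES."""
    record = PATTERN.match(line).groupdict()
    for field, cast in TYPES.items():
        record[field] = cast(record[field])
    return record

record = parse("200 426 0.007")
# request_length is now an int and request_time a float, so the JSON
# document sent to Elasticsearch contains numbers, not strings.
```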

They appear as numeric in the document:

"request_length": 426,
"request_time": 0.007,

But Kibana still reports them as strings:

[Screenshot: Kibana showing the fields with type "string"]

I've refreshed the index pattern and even deleted and recreated it in Kibana, but the result is the same.
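One thing worth checking: Kibana takes field types from the Elasticsearch index mapping, not from individual documents. If any document was indexed before the `types` conversion took effect, dynamic mapping will have fixed those fields as `text`/`keyword` for that index, and refreshing the index pattern cannot change an existing mapping. You can inspect what Elasticsearch actually stored (the index name below is a placeholder; substitute one from your cluster):

```shell
# Show the stored mapping for the index; look for the type of
# request_length and request_time under "properties".
curl -s 'localhost:9200/logstash-2018.10.31/_mapping?pretty'
```

If the mapping shows the fields as `text`, the usual fix is to let the daily rollover create a fresh index (or define an index template with the desired types) and then refresh the index pattern in Kibana.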

-- duffn
elasticsearch
fluentd
kibana
kubernetes

0 Answers