Logs are not shown in order after being shipped to Elasticsearch using Fluentd

1/21/2020

We have an application deployed in Kubernetes, and all applications are configured to log to stdout. We use a fluentd DaemonSet to collect logs from the /var/lib/docker/containers/ folders and ship them to an Elasticsearch cluster. Both the aggregated log files in /var/lib/docker/containers/ and kubectl logs <podname> show the log lines in the order the application produces them.

The Elasticsearch cluster is connected to a Kibana instance, and the logs are displayed sorted by @timestamp. Here the order of the logs is incorrect (not the same order as in the file). Due to the nature of the application, multiple log lines are produced within the same @timestamp, and only those are not ordered correctly.

Is there a way to send a line number or offset (an incrementing number) when shipping from fluentd, and use a combination of @timestamp and that value to sort properly? Or is there another way to get the same order that kubectl logs <podname> displays after indexing?
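
For reference, the kind of filter the question has in mind might look roughly like the sketch below: a record_transformer with enable_ruby that stamps each record with a counter field. The field name log_seq and the kubernetes.** tag match are made up here, and a per-process Ruby counter like this is not guaranteed to stay monotonic across fluentd restarts or multiple workers:

<filter kubernetes.**>
  @type record_transformer
  enable_ruby true
  <record>
    # made-up field name; increments a per-thread counter for each record
    log_seq ${Thread.current[:log_seq] = (Thread.current[:log_seq] || 0) + 1}
  </record>
</filter>

In Kibana the index could then be sorted by @timestamp first and log_seq second to break ties within the same timestamp.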

-- Chamila Liyanage
efk
elasticsearch
fluentd
kibana
kubernetes

1 Answer

1/22/2020

Docker containers default to UTC timestamps, so if you're shipping your Docker logs to your fluentd container using the fluentd log driver, the logs will get stamped with the fluentd container's UTC time. But if you send a log message to the fluentd container using the journald log driver, it will retain the local timestamp. You can set the container's timezone to local time in the docker-compose file like this:

environment:
  - TZ=America/Los_Angeles
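
For context, a fuller, hypothetical docker-compose service showing both the timezone override and the fluentd log driver the answer refers to; the service name, image, and fluentd address are assumptions, not taken from the original setup:

version: "3"
services:
  web:
    image: debian:jessie
    environment:
      - TZ=America/Los_Angeles       # run the container clock in local time
    logging:
      driver: fluentd                # ship container stdout/stderr to fluentd
      options:
        fluentd-address: localhost:24224
        tag: docker.web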
-- Octavian
Source: StackOverflow