I am getting the errors below. Data is loaded into Elasticsearch, but some records are missing in Kibana. This is what I see in the fluentd logs in Kubernetes:
2021-04-26 15:58:10 +0000 [warn]: #0 failed to flush the buffer. retry_time=29 next_retry_seconds=2021-04-26 15:58:43 +0000 chunk="5c0e21cad29fc91298a9d881c6bd9873" error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="Elasticsearch returned errors, retrying. Add '@log_level debug' to your config to see the full response"
2021-04-26 15:58:10 +0000 [warn]: #0 suppressed same stacktrace
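The warning itself says the full Elasticsearch response is hidden unless debug logging is on. To surface the underlying bulk errors, the elasticsearch match block can be extended like this (a sketch: `@log_level` is standard Fluentd, and `log_es_400_reason` is a fluent-plugin-elasticsearch option that logs the reason for 400 rejections — check that your plugin version supports it):

```
<match kubernetes.**>
  @type elasticsearch
  @log_level debug        # show the full ES error response in fluentd logs
  log_es_400_reason true  # log why ES rejected a record (e.g. mapping conflict)
  # ... rest of the match block unchanged ...
</match>
```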
Here is my fluentd conf:
fluent.conf: |
  <match fluent.**>
    # this tells fluentd not to output its own logs on stdout
    @type null
  </match>

  # here we read the logs from Docker's containers and parse them
  <source>
    @type tail
    path /var/log/containers/*nginx-ingress-controller*.log,/var/log/containers/*kong*.log
    pos_file /var/log/nginx-containers.log.pos
    @label @NGINX
    tag kubernetes.*
    read_from_head true
    <parse>
      @type json
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </parse>
  </source>

  <source>
    @type tail
    path /var/log/containers/*.log
    exclude_path ["/var/log/containers/*nginx-ingress-controller*.log", "/var/log/containers/*kong*.log"]
    pos_file /var/log/fluentd-containers.log.pos
    tag kubernetes.*
    read_from_head true
    <parse>
      @type json
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </parse>
  </source>

  # we use the kubernetes_metadata plugin to add metadata to the logs
  <filter kubernetes.**>
    @type kubernetes_metadata
  </filter>

  <label @NGINX>
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>

    <filter kubernetes.**>
      @type parser
      key_name log
      reserve_data true
      <parse>
        @type regexp
        expression /^(?<remote>[^ ]*(?: [^ ]* [^ ]*)?) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)"(?: (?<request_length>[^ ]*) (?<request_time>[^ ]*) (?<proxy_upstream_name>[^ ]*(?: \[[^ ]*\])*) (?<upstream_addr>[^ ]*(?:, [^ ]*)*) (?<upstream_response_length>[^ ]*(?:, [^ ]*)*) (?<upstream_response_time>[^ ]*(?:, [^ ]*)*) (?<upstream_status>[^ ]*(?:, [^ ]*)*) (?<req_id>[^ ]*))?)?$/
        time_format %d/%b/%Y:%H:%M:%S %z
      </parse>
    </filter>

    <match kubernetes.**>
      @type elasticsearch
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'https'}"
      ssl_verify false
      reload_connections false
      logstash_prefix k8-nginx
      logstash_format true
      <buffer>
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever true
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 32
        overflow_action block
      </buffer>
    </match>
  </label>

  # we send the logs to Elasticsearch
  <match kubernetes.**>
    @type elasticsearch
    include_tag_key true
    host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
    port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
    scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'https'}"
    ssl_verify false
    reload_connections false
    logstash_prefix k8-logstash
    logstash_format true
    <buffer>
      flush_mode interval
      retry_type exponential_backoff
      flush_thread_count 2
      flush_interval 5s
      retry_forever true
      retry_max_interval 30
      chunk_limit_size 2M
      queue_limit_length 32
      overflow_action block
    </buffer>
  </match>
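One thing I checked while debugging: whether the ingress access-log regexp in the `@NGINX` label actually matches my log lines, since lines that fail the parser filter never reach Elasticsearch. This is a plain-Ruby sanity check run outside fluentd; the sample line and all its field values (IPs, service name, request id) are made up for illustration:

```ruby
# The exact regexp from the <parse> section above, as a Ruby literal.
pattern = /^(?<remote>[^ ]*(?: [^ ]* [^ ]*)?) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)"(?: (?<request_length>[^ ]*) (?<request_time>[^ ]*) (?<proxy_upstream_name>[^ ]*(?: \[[^ ]*\])*) (?<upstream_addr>[^ ]*(?:, [^ ]*)*) (?<upstream_response_length>[^ ]*(?:, [^ ]*)*) (?<upstream_response_time>[^ ]*(?:, [^ ]*)*) (?<upstream_status>[^ ]*(?:, [^ ]*)*) (?<req_id>[^ ]*))?)?$/

# A hypothetical nginx-ingress access-log line in the default format.
line = '192.168.1.1 - - [26/Apr/2021:15:58:10 +0000] "GET /healthz HTTP/1.1" ' \
       '200 2 "-" "kube-probe/1.19" 0 0.001 [default-my-svc-80] ' \
       '10.0.0.5:8080 2 0.001 200 abc123'

m = pattern.match(line)
if m
  puts "method=#{m[:method]} code=#{m[:code]} upstream=#{m[:upstream_addr]}"
  # => method=GET code=200 upstream=10.0.0.5:8080
else
  puts "line did NOT match - fluentd would drop or warn on this record"
end
```

Lines that don't match this pattern would show up as `pattern not matched` warnings rather than the buffer-flush error above, so in my case the regexp seems fine and the rejections come from Elasticsearch itself.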