fluentd to elasticsearch via kubernetes-ingress

11/6/2019

I have configured Elasticsearch on a Kubernetes cluster. In the application's Kubernetes cluster I have fluentd configured, using THIS Helm chart, with the following parameters:

    spec:
      containers:
      - env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        - name: OUTPUT_HOST
          value: x.x.x.x
        - name: OUTPUT_PORT
          value: "80"
        - name: OUTPUT_PATH
          value: /elastic
        - name: LOGSTASH_PREFIX
          value: logstash
        - name: OUTPUT_SCHEME
          value: http
        - name: OUTPUT_SSL_VERIFY
          value: "false"
        - name: OUTPUT_SSL_VERSION
          value: TLSv1_2
        - name: OUTPUT_TYPE_NAME
          value: _doc
        - name: OUTPUT_BUFFER_CHUNK_LIMIT
          value: 2M
        - name: OUTPUT_BUFFER_QUEUE_LIMIT
          value: "8"
        - name: OUTPUT_LOG_LEVEL
          value: info

In the Elasticsearch cluster I have an nginx-ingress controller configured, and I want fluentd to send logs to Elasticsearch via this nginx ingress. In "OUTPUT_HOST" I am using the nginx-ingress public IP. In "OUTPUT_PORT" I have used "80", since nginx is listening on port 80.

I am getting the following error in fluentd:

2019-11-06 07:16:46 +0000 [warn]: [elasticsearch] failed to flush the buffer. retry_time=40 next_retry_seconds=2019-11-06 07:17:18 +0000 chunk="596a7f6afffad60f2b28a5e13f" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"x.x.x.x\", :port=>80, :scheme=>\"http\", :path=>\"/elastic\"}): [405] {\"error\":\"Incorrect HTTP method for uri [/] and method [POST], allowed: [HEAD, GET, DELETE]\",\"status\":405}"

I can guess from the log that it is treating "/elastic" as the index.

As mentioned HERE, I used the annotation "nginx.ingress.kubernetes.io/rewrite-target: /", but the problem persists.

After this I changed nginx-ingress to listen for calls at "/" instead of "/elastic", and changed "OUTPUT_PATH" in the fluentd config accordingly.

The error I was getting earlier is gone, but I would still like to use "/elastic" instead of "/". I am not sure what nginx config I need to change to achieve this. Please help me here.

After this I got a "request entity too large" error, which was resolved by adding "nginx.ingress.kubernetes.io/proxy-body-size: 100m" to the annotations. By default nginx allows 1m, while fluentd's default buffer chunk limit is 2M, so it was bound to fail.
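
For reference, this is roughly where that annotation lives in the Ingress manifest (the Ingress name below is a placeholder, not my actual resource name):

    metadata:
      name: elasticsearch
      annotations:
        # nginx's default client body size limit is 1m, while fluentd's default
        # buffer chunk limit is 2M, so bulk requests were rejected until this was raised
        nginx.ingress.kubernetes.io/proxy-body-size: 100m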

Now I am getting errors like:

2019-11-06 10:01:08 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kernel:default" location=nil tag="kernel" time=2019-11-06 10:01:08.267224927 +0000 record={"transport"=>"kernel", "syslog_facility"=>"0", "syslog_identifier"=>"kernel", "boot_id"=>"6e4ca7b1c1a11b74151a12979", "machine_id"=>"89436ac666fa120304f2077f3bf2", "priority"=>"6", "hostname"=>"gke-dev--node-pool", "message"=>"cbr0: port 9(vethe75a241b) entered disabled statedevice vethe75a241b left promiscuous modecbr0: port 9(vethe75a241b) entered disabled stateIPv6: ADDRCONF(NETDEV_UP): veth630f6cb0: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): veth630f6cb0: link becomes readycbr0: port 9(veth630f6cb0) entered blocking statecbr0: port 9(veth630f6cb0) entered disabled statedevice veth630f6cb0 entered promiscuous modecbr0: port 9(veth630f6cb0) entered blocking statecbr0: port 9(veth630f6cb0) entered forwarding state", "source_monotonic_timestamp"=>"61153348174"}

Can someone help me with this?

-- Stunn3r
elasticsearch
fluentd
kubernetes
nginx-ingress

1 Answer

11/7/2019
  1. Regarding the nginx config: here is the official documentation on the rewrite annotation. You can adjust it to your needs alongside OUTPUT_PATH in the fluentd config, as you already mentioned. A rough Ingress sketch is included after this list.

  2. Regarding the error event: the timeout flush is just an indication that a flush has happened. Use timeout_label to process entries for which the flush has occurred; it is usually better to dispatch the message than to emit an error event. A sketch of this is also shown below.
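
For point 1, a minimal sketch of an Ingress that keeps the external "/elastic" prefix while stripping it before the request reaches Elasticsearch (the capture-group style of rewrite-target requires nginx-ingress 0.22.0 or later; the service name, port, and API version here are assumptions, adjust them to your cluster):

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: elasticsearch
      annotations:
        kubernetes.io/ingress.class: nginx
        # rewrite /elastic/<rest> to /<rest>, so a bulk request to /elastic/_bulk
        # reaches Elasticsearch as /_bulk instead of being flattened to /
        nginx.ingress.kubernetes.io/rewrite-target: /$2
        nginx.ingress.kubernetes.io/proxy-body-size: 100m
    spec:
      rules:
      - http:
          paths:
          - path: /elastic(/|$)(.*)
            backend:
              serviceName: elasticsearch   # placeholder; use your Elasticsearch service name
              servicePort: 9200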
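
For point 2, a minimal sketch of routing timed-out flushes through timeout_label, assuming a concat filter on the kernel tag similar to the one in your chart's config (the regexp and the output settings below are placeholders, not your actual values):

    <filter kernel>
      @type concat
      key message
      # placeholder grouping rule; keep whatever your chart already uses
      multiline_start_regexp /^\S/
      flush_interval 5
      # send records flushed by timeout to the @NORMAL label
      # instead of emitting them as error events
      timeout_label @NORMAL
    </filter>

    <label @NORMAL>
      <match **>
        # dispatch the flushed records to the same Elasticsearch output
        @type elasticsearch
        host x.x.x.x
        port 80
        path /elastic
        scheme http
        logstash_format true
      </match>
    </label>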

Please let me know if that helped.

-- OhHiMark
Source: StackOverflow