Elasticsearch/client.go:408 Cannot index event

2/15/2022
2022-02-14T15:46:03.114Z WARN [elasticsearch] elasticsearch/client.go:408 Cannot index event
 publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xbaeb900, ext:63780449718, loc:(*time.Location)(0x5864aa0)}, Meta:null,
 Fields:{"agent":{"ephemeral_id":"039466b4-e76f-4ec2-ac5d-d64aac166def","hostname":"metricbeat-elk-poc-metricbeat-metrics-5b7d5cbbc7-rgztt","id":"61ca742f-27bf-4073-a2c7-40d70f7a7ce5","name":"metricbeat-elk-poc-metricbeat-metrics-5b7d5cbbc7-rgztt","type":"metricbeat","version":"7.10.2"},
 "ecs":{"version":"1.6.0"},
 "event":{"dataset":"prometheus.remote_write","module":"prometheus"},
 "host":{"name":"metricbeat-elk-poc-metricbeat-metrics-5b7d5cbbc7-rgztt"},
 "metricset":{"name":"remote_write"},
 "prometheus":{"labels":{"app":"gatekeeper","chart":"gatekeeper","control_plane":"audit-controller","gatekeeper_sh_operation":"audit","gatekeeper_sh_system":"yes","heritage":"Helm","instance":"10.6.72.104:8888","job":"kubernetes-pods","namespace":"kube-operations","pod":"gatekeeper-audit-64d4c866b4-q44zf","pod_template_hash":"64d4c866b4","release":"opa-gatekeeper"},
 "metrics":{"go_memstats_frees_total":133965610278.000000,"process_virtual_memory_max_bytes":18446744073709551616.000000}},
 "service":{"type":"prometheus"}},
 Private:interface {}(nil), TimeSeries:true}, Flags:0x0, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400):
 {"type":"mapper_parsing_exception","reason":"failed to parse field [prometheus.metrics.process_virtual_memory_max_bytes] of type [long] in document with id 'sZrp-H4BNkkcHW7KjvtZ'. Preview of field's value: '1.8446744073709552E19'",
 "caused_by":{"type":"input_coercion_exception","reason":"Numeric value (1.8446744073709552e+19) out of range of long (-9223372036854775808 - 9223372036854775807)\n at [Source: (byte[])\"{\"create\":{\"_index\":\"prom-elk-poc-2022.02.14\"}}\n{\"@timestamp\":\"2022-02-14T15:35:18.196Z\",\"ecs\":{\"version\":\"1.6.0\"},\"prometheus\":{\"metrics\":{\"controller_runtime_reconcile_time_seconds_bucket\":38848},\"labels\":{\"app\":\"gatekeeper\",\"pod\":\"gatekeeper-audit-64d4c866b4-q44zf\",\"gatekeeper_sh_operation\":\"audit\",\"heritage\":\"Helm\",\"release\":\"opa-gatekeeper\",\"chart\":\"gatekeeper\",\"pod_template_hash\":\"64d4c866b4\",\"controller\":\"constraint-controller\",\"job\":\"kubernetes-pods\",\"control_plane\":\"audit-controller\",\"i\"[truncated 80320 bytes]; line: 1, column: 168]"}}

I'm getting this error when metrics are pushed from Prometheus to Metricbeat via remote_write.

The index settings:

 "prom-elk-poc-2022.02.14" : {
    "settings" : {
      "index" : {
        "mapping" : {
          "nested_fields" : {
            "limit" : "10000"
          },
          "total_fields" : {
            "limit" : "10000"
          },
          "depth" : {
            "limit" : "500"
          }
        },
        "number_of_shards" : "5",
        "blocks" : {
          "read_only_allow_delete" : "false",
          "write" : "false"
        },
        "provided_name" : "prom-elk-poc-2022.02.14",
        "creation_date" : "1644837715451",
        "number_of_replicas" : "1",
        "uuid" : "599xUzfmTRKgyidapOBFiw",
        "version" : {
          "created" : "135227827"
        }
      }
    }
  }
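
(For reference, settings like these can be dumped with the standard index settings API; the request below only reuses the index name from the error above.)

  GET prom-elk-poc-2022.02.14/_settings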

Metricbeat config (UPDATED):

  metricbeatConfig:
    metricbeat.yml: |
      metricbeat.modules:
        - module: prometheus 
          metricsets: ["remote_write"] 
          host: "0.0.0.0" 
          port: "9201"
          use_types: true
          rate_counters: false
      output.elasticsearch:
        hosts: ${ELASTICSEARCH_HOSTS}
        index: "metricbeat-%%{+yyyy.MM.dd}"
        worker: 4
      setup.template.name: "metricbeat"
      setup.template.pattern: "metricbeat-*"
      setup.ilm.enabled: false
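
For completeness, the index patterns of the loaded template can be checked with the legacy template API (a standard 7.x request; the template name comes from setup.template.name above):

  GET _template/metricbeat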

Any help is highly appreciated. Thanks in advance.

-- qubsup
elastic-stack
elasticsearch
kubernetes
metricbeat
prometheus

1 Answer

2/15/2022

The error states

failed to parse field prometheus.metrics.process_virtual_memory_max_bytes of type long in document with id 'sZrp-H4BNkkcHW7KjvtZ'. Preview of field's value: '1.8446744073709552E19'

and

Numeric value (1.8446744073709552e+19) out of range of long (-9223372036854775808 - 9223372036854775807)

So the problem is that your index mapping has the field prometheus.metrics.process_virtual_memory_max_bytes mapped as long, but the incoming value (18446744073709551616, i.e. 2^64) does not fit in a signed 64-bit long, whose maximum is 9223372036854775807 (2^63 - 1). This typically happens when the field is not explicitly mapped and the first document that creates the index carries an integral value for it (usually 0), so dynamic mapping infers long.
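
You can confirm how the field ended up being mapped with the field mapping API (a standard Elasticsearch request; the index and field names are taken from your error):

  GET prom-elk-poc-2022.02.14/_mapping/field/prometheus.metrics.process_virtual_memory_max_bytes

If the response shows "type" : "long", dynamic mapping from the first document is the culprit.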

The underlying issue is that your index template is configured for indices whose names match your setup.template.pattern, but the index you're actually writing to is called prom-elk-poc-2022.02.14, so the Metricbeat Prometheus index template, with its correct mappings for metric fields, is never applied. You need to modify setup.template.pattern (and setup.template.name) to match your index name, or change output.elasticsearch.index so it matches the template pattern.
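Alternatively, if you want to keep the custom prom-elk-poc-* index name, you could create your own template that forces prometheus.metrics.* fields to double. This is a minimal sketch against the 7.x legacy _template API; the template name and the dynamic template are illustrative, not the official Metricbeat template:

  # illustrative template: maps any field under prometheus.metrics.* as double
  PUT _template/prom-elk-poc
  {
    "index_patterns": ["prom-elk-poc-*"],
    "mappings": {
      "dynamic_templates": [
        {
          "prometheus_metrics_as_double": {
            "path_match": "prometheus.metrics.*",
            "mapping": {
              "type": "double"
            }
          }
        }
      ]
    }
  }

Note that a template only affects indices created after it is stored; the existing index keeps its long mapping, so you would have to reindex or wait for the next daily index to roll over.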

-- Val
Source: StackOverflow