Cannot get Filebeat to post to Elasticsearch

7/24/2018

I have a Kubernetes cluster and I am trying to gather logs from the containers in my cluster. I am using Filebeat to collect the logs, send them to Elasticsearch, and then display them in Kibana. I deployed Kibana and Elasticsearch and they work fine. I am using a DaemonSet to deploy Filebeat. Here is the YAML file that I am referencing to deploy Filebeat. I used the manifest file from here and modified it a little:

https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
      - type: log
        # Mounted `filebeat-inputs` configmap:
        paths: /var/lib/docker/containers/*/*.log
        # Reload inputs configs as they change:
        reload.enabled: false
        json.message_key: log
        json.keys_under_root: true
    output.elasticsearch:
      hosts: ['x.x.x.x:9200']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.3.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: X.x.x.x
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
--- 

I checked the pods running Filebeat and they are collecting the logs, but somehow nothing is posted to Elasticsearch. What should my exact configuration be to make it post to Elasticsearch? I have been stuck on this for a couple of days now and I am out of options. Any help would be greatly appreciated.

-- Anshul Tripathi
elasticsearch
filebeat
kibana
kubernetes

1 Answer

7/24/2018

Your Filebeat config is not picking up any inputs.

In filebeat.yml, the filebeat.config.inputs path must point to the mounted filebeat-inputs ConfigMap (the inputs.d directory) instead of the log files' location. That input config in turn uses the docker input type, whose default containers.path is /var/lib/docker/containers.

https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-docker.html
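
For reference, the Elastic manifest linked in the question wires filebeat.config.inputs to the mounted inputs.d directory and reads the Elasticsearch address from the DaemonSet's environment variables. A trimmed sketch of that filebeat.yml ConfigMap data, assuming the same volume mounts and env vars as in the question (the elasticsearch/9200 defaults are the ones from that example, not from the poster's cluster):

filebeat.yml: |-
  filebeat.config:
    inputs:
      # Load input definitions from the mounted `filebeat-inputs` ConfigMap:
      path: ${path.config}/inputs.d/*.yml
      # Reload input configs as they change:
      reload.enabled: false
  output.elasticsearch:
    # Host and port are substituted from the container's env vars:
    hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']

With inputs.d mounted at /usr/share/filebeat/inputs.d (as in the question's DaemonSet), the kubernetes.yml from the filebeat-inputs ConfigMap then supplies the docker input, and the log/json options in the first ConfigMap are no longer needed.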

-- Bal Chua
Source: StackOverflow