I have set up the Elastic Stack on a private Kubernetes cloud, and I am running Filebeat on the K8s nodes. Filebeat ships the logs of some of the containers to Logstash, and those eventually show up in Kibana, but some container logs never appear, presumably because they are not harvested in the first place. What am I doing wrong?
Filebeat is able to read from paths such as /var/lib/docker/containers/7a36cc887cc4ba1cea8ebedcf5ed8c74fee9e6cd307bac5e1ba795d07369ca2d/7a36cc887cc4ba1cea8ebedcf5ed8c74fee9e6cd307bac5e1ba795d07369ca2d-json.log. I have JupyterHub, Cassandra, and SFTP services running on my K8s cluster, and the logs I see for those with kubectl logs -f are fetched by Filebeat. However, there are also some user applications running on the cluster, and for those the logs I see with kubectl logs -f are not being fetched by Filebeat.
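To rule out a pathing problem, I can locate the log file of a given container by resolving its container ID from the pod status and checking for the JSON log on the node. A rough sketch of that check (my-app-pod is a placeholder name, not an actual pod of mine):

# Resolve the Docker container ID of the pod's first container (placeholder pod name):
kubectl get pod my-app-pod -o jsonpath='{.status.containerStatuses[0].containerID}'
# prints something like: docker://7a36cc887cc4...

# Then, on the node running that pod, the corresponding JSON log file should exist:
ls -l /var/lib/docker/containers/<container-id>/<container-id>-json.log

My Filebeat configuration is below.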
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-logging
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
      prospectors:
        # Mounted `filebeat-prospectors` configmap:
        path: ${path.config}/prospectors.d/*.yml
        # Reload prospectors configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.logstash:
      hosts: ['logstash-service:5044']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-prospectors
  namespace: kube-logging
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
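For completeness: the docker prospector can only harvest files that are visible inside the Filebeat container, so the DaemonSet has to mount both the prospectors ConfigMap and the host's Docker log directory. A minimal sketch of that wiring, assuming the layout of the stock filebeat-kubernetes.yaml manifest (the image tag and names here are illustrative, not my exact spec):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-logging
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.2.4  # image tag is an assumption
        args: ["-c", "/etc/filebeat.yml", "-e"]
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
          readOnly: true
        - name: prospectors
          mountPath: /usr/share/filebeat/prospectors.d  # ${path.config}/prospectors.d
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers         # the harvested *-json.log files
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: filebeat-config
      - name: prospectors
        configMap:
          name: filebeat-prospectors
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

With mounts like these, every *-json.log under /var/lib/docker/containers on the node should be visible to the docker prospector above.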
I want the logs from all of my containers to be fetched by Filebeat and shown in Kibana. How can I achieve this? What is the missing link?