I am currently trying to ship the logs from my Kubernetes cluster to an external Elasticsearch/Kibana. So far I have used this DaemonSet deployment to get Filebeat running and piping to my external server, but I am unable to figure out how to set the index to something meaningful. This documentation page tells me that I need to create an `index` key in the `output.elasticsearch` section, but I don't know what to put in the value.
My desired output format would be something along the lines of `<cluster-name>-<namespace>-<pod-name>`, e.g. `devKube-frontend-publicAPI-123abc`.
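For reference, the relevant part of my `filebeat.yml` currently looks roughly like this (host and credentials are placeholders; the `index` value is the part I'm missing):

```yaml
output.elasticsearch:
  hosts: ["https://elasticsearch.example.com:9200"]
  username: "elastic"
  password: "${ELASTICSEARCH_PASSWORD}"
  # index: <what goes here?>
```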
Precondition: you have enabled `add_kubernetes_metadata: ~`.
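In a typical `filebeat.yml` that is a one-liner under `processors` (a minimal sketch; the `~` simply means "use the default settings"):

```yaml
processors:
  - add_kubernetes_metadata: ~
```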
Then you can use that metadata in the index name like this:
```yaml
output.elasticsearch:
  index: "%{[kubernetes.namespace]:filebeat}-%{[beat.version]}-%{+yyyy.MM.dd}"
```
- `%{[kubernetes.namespace]:filebeat}`: use the Kubernetes namespace or, if there is none, fall back to `filebeat`.
- `%{[beat.version]}`: highly recommended for the scenario where you upgrade Filebeat and there is a breaking change in the mapping. This should be limited to major version changes (if at all), but it is an issue you can easily avoid with this setting.
- `%{+yyyy.MM.dd}`: a daily index, or even better an ILM policy to get evenly and properly sized shards.

PS: You have the pod name and other details in fields from `add_kubernetes_metadata: ~`.
I would be careful not to cut the indices into tiny pieces, since every shard has a certain amount of overhead. The default Filebeat ILM policy rolls over at 50GB per shard; if your shards are much smaller than 10GB, you will most likely run into issues at some point. Leave the indices a bit more coarse-grained and just use a filter for a specific pod instead.
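One more sketch, assuming a Filebeat 7.x setup: if you do keep a custom index name rather than ILM, the custom `index` is ignored while ILM is active, and Filebeat also expects a matching template name and pattern:

```yaml
# Only needed when using a custom index name instead of ILM:
setup.ilm.enabled: false
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
```

And for a specific pod, a Kibana filter such as `kubernetes.pod.name : "publicAPI-123abc"` finds the logs without needing dedicated indices.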