Kubernetes log management via Filebeat to Logstash

7/31/2018

I'm in the process of migrating some services to Kubernetes and I'm looking into a solution for log file management. Said services can be rather noisy (log-wise), and as they will run in various containers I plan to centralize the logs using Logstash -> Elasticsearch.

If these were VMs, I'd probably set up Filebeat to ship the logs and configure logrotate to be pretty firm about not letting any files get more than a day old. Is this the appropriate mechanism for Kubernetes? I don't want to cause problems on the Kubernetes hosts by filling up disks.

In essence: each container would run a service + Filebeat + a strict logrotate policy. Logs would be forwarded to a central Logstash instance.
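For concreteness, a minimal `filebeat.yml` sketch of that per-container setup (the log path and the Logstash hostname here are assumptions, not values from the question):

```yaml
# Ship this service's log files to a central Logstash instance.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myservice/*.log   # hypothetical log path for the service
    close_inactive: 5m             # release file handles promptly so logrotate can reclaim disk

output.logstash:
  hosts: ["logstash.example.com:5044"]   # hypothetical central Logstash endpoint
```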

Is that reasonable?

-- ethrbunny
filebeat
kubernetes
logstash

1 Answer

8/1/2018

There are no restrictions on which log aggregator you use. You can go with service + Filebeat + a strict logrotate policy if that works for you. Alternatively, you can use fluent-bit as a log aggregator and then send the logs to ELK.

In the official documentation you can find a good how-to on using ELK + fluentd.
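As a sketch of the fluent-bit route (hostnames are assumptions; fluent-bit is typically deployed as a DaemonSet that tails the node's container logs rather than running per container):

```ini
# Tail container logs on each node and forward them to Elasticsearch.
[INPUT]
    Name   tail
    Path   /var/log/containers/*.log
    Tag    kube.*

[OUTPUT]
    Name   es
    Match  kube.*
    Host   elasticsearch.example.com   # hypothetical Elasticsearch endpoint
    Port   9200
```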

-- Nick Rak
Source: StackOverflow