How can I aggregate log events into a single entry even though the application logger writes them across multiple lines, when the Docker container is deployed to a GCP Kubernetes cluster?
On AWS we can use the date/time format to identify the start of an event. What is the substitute in GCP?
Thanks.
In my opinion, you need a dedicated solution to manage your logs effectively.
One of the most popular solutions for aggregating/managing/sharing logs is the ELK stack, i.e. Elasticsearch, Logstash and Kibana, or a variant of the same stack with Fluentd instead of Logstash: the EFK stack.
The ELK stack includes a family of lightweight data shippers called Beats. One of them is Filebeat, which, unsurprisingly, works with files: in a nutshell, it tails a file and ships each new line, so it can read any log file.
Filebeat supports configuration options that address exactly this problem:
multiline.pattern:
multiline.negate:
multiline.match:
Generally, you should define a regular expression that unambiguously matches the beginning of each log event.
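For example, a minimal Filebeat input sketch could look like the following; the path /var/log/myapp/*.log and the date-based pattern are assumptions, so adjust them to your application's log location and line format:

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/myapp/*.log   # hypothetical path, point this at your application's log files
    # Treat every line that does NOT start with a date (e.g. "2021-04-01 12:00:00 ...")
    # as a continuation of the previous event.
    multiline.pattern: '^\d{4}-\d{2}-\d{2}'
    multiline.negate: true
    multiline.match: after
```

With negate: true and match: after, any line that does not match the pattern is appended to the preceding line that did, which collapses stack traces and other wrapped output into a single event.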
So, give it a try. The stack also supports different types of integration with Kubernetes, e.g. in-cluster deployment and autodiscover.
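As a rough sketch of the autodiscover approach, assuming your pods carry a hypothetical label app: my-app and you want the same multiline handling applied to their container logs:

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.labels.app: my-app   # hypothetical label, match it to your deployment
          config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
              # Same multiline handling as above, applied per container.
              multiline.pattern: '^\d{4}-\d{2}-\d{2}'
              multiline.negate: true
              multiline.match: after
```

Running Filebeat as a DaemonSet inside the cluster with a configuration like this lets it discover pods dynamically and apply the multiline settings only to the containers that match the condition.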