Logs from Docker when using Kubernetes

3/19/2020

I've finally managed to run my containers and let them communicate. Currently the setup is 1-1 (one frontend, one backend). Now I wish to have n instances of the frontend and m instances of the backend, but a question came to me about handling the logs. If I run only one instance of each, I can configure two volumes (one for the frontend and one for the backend) and have them write there. When the containers are orchestrated by Kubernetes, how can I set up the volumes so that frontend instance 1 won't overwrite data written by frontend instance 2?

Thanks

-- advapi
kubernetes

2 Answers

3/19/2020

You generally don't write logs to a volume. You write them to stdout/stderr and the container runtime manages them for you. You can then access them via kubectl logs or ship them elsewhere using tools like Fluentd.
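
As a minimal sketch of that pattern (assuming a Python service; the logger name "frontend" is just a placeholder), the application writes to stdout rather than to a file on a mounted volume, and the runtime captures each container's stream separately:

```python
# Minimal sketch (assumed Python service): log to stdout instead of a file,
# so the container runtime captures the stream for each container separately.
import logging
import sys

logging.basicConfig(
    stream=sys.stdout,  # stdout/stderr, not a path on a mounted volume
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)

log = logging.getLogger("frontend")
log.info("request served")  # visible via `kubectl logs <pod-name>`
```

Because every pod gets its own stream, replicas never overwrite each other's output, and `kubectl logs <pod-name>` reads a single pod's log.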

-- coderanger
Source: StackOverflow

3/19/2020

This is a general problem that you need to solve in all persistent distributed systems, and it is not restricted to logs -- it applies to any persistent data. It gets worse when you have more than one node and a transaction starts in the front-end service on node 1 and is then handed off to the backend service on node 2: the transaction's trail is now split across multiple nodes.
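
One common way to keep such a split transaction traceable in the logs is to attach a correlation id to every log line, so the pieces can be joined again in the central store. A rough sketch in Python follows; the X-Request-ID header and the field names are illustrative assumptions, not anything Kubernetes provides by itself:

```python
# Sketch: tag every log line with a correlation id so a transaction that
# crosses from the frontend on node 1 to the backend on node 2 can be
# stitched together later. The X-Request-ID header is an assumption here.
import logging
import sys
import uuid

logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s request_id=%(request_id)s %(message)s",
)
log = logging.getLogger("backend")

def handle(headers: dict) -> None:
    # Reuse the id minted at the edge, or generate one if it is missing.
    request_id = headers.get("X-Request-ID") or str(uuid.uuid4())
    log.info("processing order", extra={"request_id": request_id})

handle({"X-Request-ID": "3f2c9a"})
```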

The most common solution for logging is to ship your logs to a centralised logging service that isn't on the same node. Shipping is often done with fluentd, though there are plenty of other log shippers. You'll then need an aggregator, and again there are many to choose from. Lots of people use Elasticsearch, or, commercially, Splunk is very popular.
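
A structured format makes that pipeline easier. As a rough sketch (the field names are illustrative assumptions), each service can emit one JSON object per line on stdout, which a node-level shipper such as fluentd can forward to the aggregator without any per-pod volume setup:

```python
# Sketch: one JSON object per line on stdout. A node-level shipper (e.g.
# fluentd running as a DaemonSet) can forward these lines to an aggregator
# such as Elasticsearch. Field names are illustrative assumptions.
import json
import sys
import time

def log_event(service: str, level: str, message: str, **fields) -> None:
    record = {"ts": time.time(), "service": service, "level": level,
              "message": message, **fields}
    sys.stdout.write(json.dumps(record) + "\n")
    sys.stdout.flush()

log_event("frontend", "info", "checkout started", request_id="3f2c9a")
```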

-- Software Engineer
Source: StackOverflow