Kubernetes logging based on files inside a pod

3/30/2018

I have a Kubernetes cluster with some application pods, and each pod generates multiple log files. I want to ship those files to a centralized logging solution such as Elasticsearch. The logs are neither part of the pod's stdout/stderr, nor are they mounted as a host volume. So basically I need a solution that reads a file from my pod and sends it to Elasticsearch or some other logging solution.

I also need a solution for the same use case, but with standalone Docker containers that are not running on Kubernetes.

-- Abhizer Saifee
docker
elasticsearch
kubernetes
logging

2 Answers

3/31/2018

I would consider mounting a persistent volume into your pod. Employing a sidecar container such as fluentd, you can forward these log files to Elasticsearch. In the case of the Docker engine, the same volume can be shared between two Docker containers.
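A minimal sketch of what such a pod could look like, with an application container and a fluentd sidecar sharing an emptyDir volume. The image names, log path, and the fluentd-config ConfigMap are placeholders, and you would still supply a fluentd configuration that tails the shared directory and forwards to Elasticsearch:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
    - name: app-logs               # shared scratch space holding the log files
      emptyDir: {}
    - name: fluentd-config         # hypothetical ConfigMap containing fluent.conf
      configMap:
        name: fluentd-config
  containers:
    - name: app
      image: my-app:1.0            # placeholder application image
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app  # the application writes its log files here
    - name: log-forwarder
      image: fluent/fluentd        # sidecar; pin a specific tag in practice
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app  # sees the same files and can tail/forward them
          readOnly: true
        - name: fluentd-config
          mountPath: /fluentd/etc

For standalone Docker, the equivalent is a named volume mounted into both the application container and the fluentd container, e.g. docker run -v app-logs:/var/log/app ... for each of them.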

-- Bal Chua
Source: StackOverflow

3/31/2018

nor are they mounted as a host volume

There's your problem: you will want to expose a path that is a host volume, so that your existing centralized log-slurping tool can see those files. The only asterisk I know of when doing that is to be mindful of permissions, which won't be an issue if your process is running as root, but will be if it is non-root -- you'll need to volume-mount a host directory whose permissions allow the container process to open and write its files.
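To make that concrete, here is a minimal sketch of a pod writing its log files onto a hostPath volume so a node-level collector can pick them up; the paths, image, and uid are placeholders, not something from your setup:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-host-logs
spec:
  volumes:
    - name: host-logs
      hostPath:
        path: /var/log/my-app      # directory on the node; must already exist with
        type: Directory            # ownership/mode the container's user can write to
  containers:
    - name: app
      image: my-app:1.0            # placeholder application image
      securityContext:
        runAsUser: 1000            # non-root example; the host directory must be writable by this uid
      volumeMounts:
        - name: host-logs
          mountPath: /var/log/app  # the app writes here; the files land on the node

Your existing centralized log-slurping tool (or, say, a fluentd DaemonSet) can then tail /var/log/my-app on each node.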

But, I've been around long enough to know that there are always "yes, but"s when dealing with containerizing software, so there are two other alternatives you may consider.

If you are using one of the existing log aggregation tools that expect all logging output to appear on the container's stdout and stderr, then you will want to take advantage of the fact that a root process inside the container can write to any file, and send the logs to the stdout and stderr of the container's "pid 1" (/proc/1/fd/1 and /proc/1/fd/2), in one of at least two ways I know of:

log directly to stdout

Some logging frameworks will tolerate being given a "file path" and will cheerfully just open it and begin writing to it:

<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <appender name="DOCKER_STDOUT" class="org.apache.log4j.FileAppender">
    <param name="File" value="/proc/1/fd/1"/>
    <layout class="org.apache.log4j.PatternLayout"/>
  </appender>
</log4j:configuration>

BTW: that is only an example; I don't know offhand whether log4j tolerates such a thing.

redirect to stdout

This is similar to the previous tactic, with the advantage of not requiring an application configuration change and working 100% of the time, but with the disadvantage of making the in-cluster deployment a lot more complicated.

I had to do this very trick with the kong:0.10 container, because its logging wrote only to files and did not tolerate being pointed at the file descriptors as above.

One would need to modify the command: block to launch the application and then spawn a tail for the in-container log (a sketch of such a command: block follows below), or use some kind of post-deployment trick to exec in and start a tail:

tail -f /the/inside/file.log > /proc/1/fd/1

Choose tail -F if the application rotates the file out from underneath tail, and use nohup tail -F ... & if you need to protect the process from termination when the post-deployment exec shell exits.
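For the first approach, a minimal sketch of what the modified command: block might look like in the pod spec (the image name, log path, and application binary are placeholders):

containers:
  - name: app
    image: my-app:1.0              # placeholder application image
    command:
      - /bin/sh
      - -c
      # background a tail that copies the in-container log file to the stdout
      # of pid 1, then exec the real application in the foreground
      - |
        touch /var/log/app/file.log
        tail -F /var/log/app/file.log > /proc/1/fd/1 &
        exec /usr/local/bin/my-app

The exec replaces the shell so the application itself becomes pid 1, and the backgrounded tail keeps copying new lines from the log file into the container's stdout.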

-- mdaniel
Source: StackOverflow