How to access another container's stdout and stderr from one container inside a Kubernetes pod

7/17/2019

I have a Pod with two containers.

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: my-container
    image: google/my-container:v1
  - name: third-party
    image:  google/third-party:v1

One container runs my image, and the second runs a third-party image whose stdout/stderr I can't control.
I need my-container to access the logs written by the third-party container.
Inside "my-container" I want to collect all the stdout and stderr from the "third-party" container, add some metadata, and write it with my logger.

I can't use a privileged container with volumeMounts.

It would be great if I could do something like this:

 containers:
  - name: my-container
    image: google/my-container:v1
    volumeMounts:
    - name: varlog
      mountPath: /var/log

  - name: third-party
    image:  google/third-party:v1 
    stdout: /var/log/stdout
    stderr: /var/log/stderr

 volumes:
  - name: varlog
    emptyDir: {}
-- nassi.harel
fluentd
kubernetes
logging
openshift

3 Answers

7/18/2019

Docker tracks container logs according to the logging driver it is configured with. The default logging driver is json-file, which writes each container's stdout and stderr to a file under /var/lib/docker/containers on the host machine that runs Docker.

In Kubernetes, the kubelet exposes these log files through symlinks in the /var/log/containers folder on each worker node.

What you are probably looking for is a fluentd DaemonSet, which runs a fluentd pod on each worker node and ships the logs to a sink such as S3, CloudWatch, or Elasticsearch. Fluentd provides many sinks; you can use one that suits your needs. I hope this is what you want to do with your my-container.
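As a sketch, a minimal fluentd DaemonSet could look like the following. The image name and tag are illustrative (fluent/fluentd-kubernetes-daemonset publishes variants per sink), and real deployments usually also mount /var/lib/docker/containers, since the entries in /var/log/containers are symlinks into it:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        # Example image; pick the variant matching your sink
        # (e.g. -elasticsearch, -cloudwatch) and configure it via env vars.
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers
```

Note that this is a cluster-level pipeline: it ships the third-party container's logs off the node, rather than making them visible inside my-container directly.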

-- Malathi
Source: StackOverflow

7/17/2019

Because containers inside a pod can mount the same volumes, you can use a shared volume to make that data accessible to both.

For your specific purpose, you'll need to redirect both streams (stdout and stderr) into files on the shared volume. Then you can export them from the main container to whichever logging driver you're running in your cluster.

Note that there is no field in the pod spec to redirect these outputs into a file, though, so you have to wrap the container's command yourself.
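A sketch of this workaround, assuming the third-party image's entrypoint can be overridden (the binary path /app/third-party is hypothetical): wrap the command so its streams are redirected into files on an emptyDir volume that both containers mount:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: my-container
    image: google/my-container:v1
    volumeMounts:
    - name: varlog
      mountPath: /var/log/third-party
      readOnly: true
  - name: third-party
    image: google/third-party:v1
    # Hypothetical entrypoint. Plain redirection means `kubectl logs`
    # for this container will be empty; pipe through tee instead if you
    # still need the streams on the container's own stdout/stderr.
    command: ["/bin/sh", "-c"]
    args:
    - /app/third-party > /var/log/third-party/stdout.log 2> /var/log/third-party/stderr.log
    volumeMounts:
    - name: varlog
      mountPath: /var/log/third-party
  volumes:
  - name: varlog
    emptyDir: {}
```

my-container can then tail the two files, add its metadata, and forward them with its own logger.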

-- yyyyahir
Source: StackOverflow

2/6/2020

I think I understood your requirement. I stumbled upon Logspout: https://github.com/gliderlabs/logspout

Do

$ docker pull gliderlabs/logspout:latest

and then run the container like this:

$ docker run \
--volume=/var/run/docker.sock:/var/run/docker.sock \
gliderlabs/logspout \
raw://192.168.10.10:5000

It attaches to all the containers on a host and routes their logs wherever you want.

Check the link above for details.
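On Kubernetes, the same docker run invocation roughly maps to a DaemonSet along these lines (a sketch; the raw:// endpoint is the example address from above, and this only works on nodes that actually run the Docker engine, since it reads the Docker socket):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logspout
spec:
  selector:
    matchLabels:
      app: logspout
  template:
    metadata:
      labels:
        app: logspout
    spec:
      containers:
      - name: logspout
        image: gliderlabs/logspout:latest
        # Same routing argument as the docker run example above.
        args: ["raw://192.168.10.10:5000"]
        volumeMounts:
        - name: dockersock
          mountPath: /var/run/docker.sock
      volumes:
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
```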

-- rAhulD
Source: StackOverflow