Kubernetes logs not found in default locations?

7/12/2021

In my k8s environment, where Spring Boot applications run, I checked the log locations in /var/log and /var/lib, but both were empty. Then I found the logs in /tmp/spring.log. It seems this is the default log location. My questions are:

  1. How does kubectl logs know that it should read logs from the /tmp location? I do get log output from the kubectl logs command.
  2. I have fluent-bit configured with the following input:

 [INPUT]
    Name              tail
    Tag               kube.dev.*
    Path              /var/log/containers/*dev*.log
    DB                /var/log/flb_kube_dev.db

This suggests it should read logs from /var/log/containers/, but that directory contains no logs. However, I am getting fluent-bit logs successfully. What am I missing here?

-- Viraj
fluent-bit
kubernetes
spring-boot

2 Answers

7/12/2021

Docker logs only contain the logs that are dumped on STDOUT by your container's process with PID 1 (your container's entrypoint or cmd process).

If you want to see the logs via kubectl logs or docker logs, you should redirect your application logs to STDOUT instead of the file /tmp/spring.log. Here's an example of how this can be achieved with minimal effort.
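In Spring Boot, a file such as /tmp/spring.log typically means that the logging.file.path property (or the legacy logging.path) is set to /tmp; when no logging.file.* property is configured at all, Spring Boot logs to the console only. A minimal application.yml sketch, assuming that property is where your file location comes from:

    # application.yml -- Spring Boot writes to STDOUT by default.
    # A file such as /tmp/spring.log usually means something like this is set:
    #
    # logging:
    #   file:
    #     path: /tmp        # <-- remove or comment out a setting like this
    #
    logging:
      pattern:
        # Optional: tune the console pattern. STDOUT stays in use as long as
        # no logging.file.name / logging.file.path property is configured.
        console: "%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %logger{36} - %msg%n"

Once the output is on STDOUT, the container runtime captures it, and on Docker-based nodes the kubelet exposes it under /var/log/containers/, which is exactly the path the fluent-bit tail input in the question expects.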


Alternatively, you can also use a hostPath volumeMount. This way, you can directly access the log from a path on the host (a sketch follows the warning below).

Warning when using hostPath volumeMount

If the pod is shifted to another host for some reason, your logs will not move along with it. A new log file will be created on the new host at the same path.
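If you go the hostPath route, a minimal Pod sketch could look like the following (the Pod name, image, and paths are placeholders for illustration):

    apiVersion: v1
    kind: Pod
    metadata:
      name: spring-app                       # placeholder name
    spec:
      containers:
        - name: app
          image: example/spring-app:latest   # placeholder image
          volumeMounts:
            - name: app-logs
              mountPath: /tmp                # where the app writes spring.log
      volumes:
        - name: app-logs
          hostPath:
            path: /var/log/spring-app        # directory on the node
            type: DirectoryOrCreate

With this, the /tmp/spring.log written inside the container shows up as /var/log/spring-app/spring.log on whichever node the Pod is scheduled to.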

-- Raghwendra Singh
Source: StackOverflow

7/12/2021

If you are searching for the actual location of the logs outside the containers (that is, on the host nodes of the cluster), this depends on a couple of things. I assume you are using Docker to run your containers under Kubernetes, which is the most common setup.

On each node of your Kubernetes cluster, you can use the following command to check which logging driver is currently in use:

docker info | grep -i logging

The default value should be json-file, which means that the containers' logs are written as JSON to a certain location on your host nodes.
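For reference, each line in a json-file log is a single JSON object with log, stream, and time keys; an illustrative (made-up) entry looks like this:

    {"log":"2021-07-12 10:15:00.123  INFO 1 --- [main] com.example.App : Started App\n","stream":"stdout","time":"2021-07-12T10:15:00.123456789Z"}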

If you find another driver, such as journald, then the Docker logging driver is sending logs directly to the systemd journal instead. There are many logging drivers, so as a first check you should make sure that all your Kubernetes nodes are configured to log as JSON files (or in whatever way you need to harvest them).
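If you need to switch a node back to json-file, the driver is typically set in /etc/docker/daemon.json, followed by a restart of the Docker daemon. A minimal sketch, with log rotation options added as a reasonable default:

    {
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "10m",
        "max-file": "3"
      }
    }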


Once this is done, you can start checking where your containers are writing their logs. Choose a Pod to analyze, then:

Identify which Kubernetes node it is running on

kubectl get pod pod-name -owide

Grab the container ID with something like the following

kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}'

where the ID should look something like docker://f834508490bd2b248a2bbc1efc4c395d0b8086aac4b6ff03b3cc8fd16d10ce2c

Remove the docker:// prefix, SSH into the Kubernetes node on which this container is running, then run

docker inspect container-id | grep -i logpath

This should give you the log location for that particular container. You can tail the file to check whether the logs are really there.
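Putting the steps together, a small sketch (the pod name my-pod is a placeholder, and the docker inspect part has to run on the node itself):

    # On a machine with kubectl access: grab the container ID, strip docker://
    CID=$(kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].containerID}' | sed 's|^docker://||')

    # On the node running the pod: ask Docker directly for the log path
    docker inspect --format '{{.LogPath}}' "$CID"

    # Tail it to verify the logs are actually arriving
    sudo tail -f "$(docker inspect --format '{{.LogPath}}' "$CID")"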


In my case, the container I tried this procedure on was logging inside:

/var/lib/docker/containers/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63-json.log
-- AndD
Source: StackOverflow