Logs in Kubernetes for pods deployed using Deployments

1/23/2018

I will try to explain my problem below:

  1. Create a pod using a Deployment, then apply an update to it with kubectl apply -f sampledep.yaml (see the command sketch after these steps).

  2. The pod name changes, as kubectl get pods shows.

  3. So whatever logs we had in the previous pod no longer exist, or at least cannot be retrieved.
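For context, a minimal command sequence that reproduces this (the Deployment name sampledep and the generated pod names below are just placeholders):

  kubectl apply -f sampledep.yaml     # initial rollout
  kubectl get pods                    # e.g. sampledep-5c9f6d7b8-abcde
  # change something in the pod template (e.g. the image), then:
  kubectl apply -f sampledep.yaml     # rolls out a new ReplicaSet
  kubectl get pods                    # e.g. sampledep-7d4b9c6f5-fgh12 has replaced the old pod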

My questions are:

  1. Is there a way to retrieve old pod logs for that application?
  2. Is there a way to configure the size of logs that can accumulate for a pod?
  3. What happens to a pod if logs accumulate until there is no space left on the node?
  4. What is the recommended way to view/manage logs in Kubernetes for deployed pods?
-- Anil Kumar P
kubernetes
logging

4 Answers

4/7/2019

When a pod's container crashes or restarts, the previous container's logs still remain on the node, under /var/lib/docker/containers for Docker. You can retrieve them with

kubectl logs <pod-name> -c <container-name> --previous
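For example, assuming the pod still exists and its container has restarted (the names below are placeholders):

  kubectl get pods                                     # the RESTARTS column shows whether a previous container instance exists
  kubectl logs <pod-name> -c <container-name> --previous

Note that this only works while the pod object itself still exists; once the pod is deleted, --previous cannot recover its logs.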
-- Navaneeth pasupathi
Source: StackOverflow

11/18/2019
  1. Is there a way to retrieve old pod logs for that application?

Not if the Pod is deleted.

  2. Is there a way to configure the size of logs that can accumulate for a pod?

Yes, but Kubernetes does not handle log rotation itself, so you'd have to implement a solution yourself using something like logrotate.
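As a rough sketch of such a solution (the path assumes the default Docker json-file log driver, and the values are illustrative rather than recommendations; Docker's own max-size/max-file log-opts are an alternative):

  # /etc/logrotate.d/docker-containers
  /var/lib/docker/containers/*/*.log {
      daily
      rotate 5
      compress
      missingok
      copytruncate
  }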

  3. What happens to a pod if logs accumulate until there is no space left?

Then you run out of disk space! This is also why it's important to implement monitoring and alerting, so you can do something about it when the disk space on your Kubernetes node is running low.
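One concrete signal to watch for: the kubelet sets a DiskPressure condition on the node when disk is running low (and starts evicting pods to reclaim space). For example (the node name is a placeholder):

  kubectl describe node <node-name> | grep -i pressure   # shows DiskPressure among the node conditions
  df -h /var/lib/docker                                   # on the node itself, check the container runtime's disk usage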

  4. What is the recommended way to view/manage logs in Kubernetes for deployed pods?

Your container runtime captures the logs from the container and stores them at /var/lib/docker/containers (for Docker). Kubernetes (the kubelet, to be more specific) creates symlinks to these log files at /var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/. You should use a log shipper like Filebeat, fluentd, or Fluent Bit to watch for changes in the /var/log/pods/ or /var/log/containers/ directories and push the log entries to a centralized location such as a Kafka stream, Elasticsearch, or some other form of persistence.
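As a rough illustration, a minimal Fluent Bit configuration along these lines (typically run as a DaemonSet; the Elasticsearch host and index below are assumptions for the sketch) tails the container logs and ships them to Elasticsearch:

  [INPUT]
      Name   tail
      Path   /var/log/containers/*.log
      Parser docker
      Tag    kube.*

  [FILTER]
      Name  kubernetes
      Match kube.*

  [OUTPUT]
      Name  es
      Match kube.*
      Host  elasticsearch.logging.svc
      Port  9200
      Index kubernetes-logs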

-- d4nyll
Source: StackOverflow

1/23/2018
  1. Try to run kubectl get pods --show-all. If you can find your pod there, you can just use kubectl logs <pod name>. If not, I don't think you can retrieve the logs anymore.

  2. The recommended way to manage logs in Kubernetes is to use an add-on like fluentd-elasticsearch. This way you never save logs on the pod's filesystem itself; you just print logs from your container to STDOUT, and fluentd automatically ships them to Elasticsearch, which you can later query with Kibana. There is no need to limit accumulated logs for a pod, since they are never accumulated on the pod itself (a quick demonstration follows).
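A quick way to see the STDOUT approach in action (the pod name and image below are just placeholders):

  kubectl run sample --image=busybox --restart=Never -- sh -c 'while true; do echo "hello from STDOUT"; sleep 5; done'
  kubectl logs -f sample    # this is the same stream a node-level fluentd/Fluent Bit agent would pick up and ship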

-- Erez Rabih
Source: StackOverflow

1/23/2018

Agree with the fluentd-elasticsearch suggestion provided by Erez. To add to his answer, you should also be able to run kubectl logs --previous <pod name>.

https://kubernetes.io/docs/concepts/cluster-administration/logging/

-- Derek Lemon
Source: StackOverflow