We had a system outage: the service was unresponsive, and restarting it with kubectl rollout restart sts myservice fixed it. However, I now want to look at the logs to find the cause of the problem. When I try kubectl logs --previous myservice-0, it says 'previous terminated container "mycontainer" in pod "myservice-0" not found'. Is there a way to find the logs from before the restart? I also tried looking at the dead Docker containers (docker ps -a); there are containers that exited 6 months ago, but no recently exited containers of my service. Why is that?
I suggest the following reading: The Complete Guide to Kubernetes Logging:
In Kubernetes, when pods are evicted, crashed, deleted, or scheduled on a different node, the logs from the containers are gone. The system cleans up after itself. Therefore you lose any information about why the anomaly occurred.
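That cleanup also answers the docker ps -a question: the kubelet garbage-collects exited containers, so recently dead containers of your service disappear from the node. While the rotated log files themselves are unrecoverable, a few traces usually outlive the restart. A hedged sketch of what you can still check (pod and container names are taken from your question; the one-hour event TTL and the systemd unit name are assumptions about your cluster's defaults):

```shell
# Cluster events are kept briefly (default API-server event TTL is 1h) and
# may record the OOM kill, eviction, or failed probe that preceded the restart:
kubectl get events --field-selector involvedObject.name=myservice-0
kubectl describe pod myservice-0

# On the node itself, the kubelet journal may mention why the container was
# stopped or garbage-collected (assumes a systemd-based node):
journalctl -u kubelet --since "-2 hours"
```

If the restart happened more than an hour ago, the events are likely gone too, which is exactly why the docs below recommend an external log backend.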
Also, as per Logging Architecture:
If you want to access the application's logs if a container crashes; a pod gets evicted; or a node dies, ... you need a separate backend to store, analyze, and query logs. Kubernetes does not provide a native storage solution for log data. Instead, there are many logging solutions that integrate with Kubernetes.
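Since the docs leave the choice of backend open, one hedged way to start retaining logs before the next incident is to run a node-level agent such as Fluent Bit as a DaemonSet via its official Helm chart (the namespace and default chart values here are assumptions; you still need to point its output at a backend you operate):

```shell
# Install Fluent Bit as a DaemonSet that tails container logs on every node:
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install fluent-bit fluent/fluent-bit --namespace logging --create-namespace
```

Check the chart's values.yaml for the default [OUTPUT] configuration and override it (via --set or a values file) to ship logs to whichever backend you choose.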
Some examples of such log aggregation solutions are: