I created a deployment in which the container always fails. I noticed that a new container is automatically created because of the restart policy, but then I am unable to check the logs of the failed container. Is there a way to check those logs?
You can use the --previous flag of kubectl logs:
--previous If true, print the logs for the previous instance of the container in a pod if it exists.
Example:
kubectl logs my-pod-crashlooping --container my-container --previous
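For completeness, a typical workflow might look like this (the pod and container names here are hypothetical placeholders):

# Find the crash-looping pod (STATUS will show CrashLoopBackOff)
kubectl get pods

# Logs of the current (restarted) container instance
kubectl logs my-pod-crashlooping --container my-container

# Logs of the previous, failed container instance
kubectl logs my-pod-crashlooping --container my-container --previous

Note that --previous only reaches the immediately preceding instance of the container within the same pod; if the pod itself was deleted and recreated, those logs are gone. kubectl describe pod my-pod-crashlooping also shows the crashed container's last state and exit code, which can help when the logs themselves are empty.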