Automatic restart of a Kubernetes pod

1/25/2019

I have a Kubernetes cluster on Google Cloud Platform. The cluster contains a deployment with one pod, and the pod has two containers. I have observed that the pod was replaced by a new pod and all of its data was wiped out, and I am unable to identify the reason.

I have tried the two commands below:

  1. kubectl logs [podname] -c [containername] --previous

**Result:** previous terminated container [containername] in pod [podname] not found

  2. kubectl get pods

**Result:** I see that the number of restarts for my pod equals 0.

Is there anything I could do to get the logs from my old pod?

-- ilegolas
google-cloud-platform
kubectl
kubernetes

2 Answers

1/27/2019

Try the command below to see the pod info:

kubectl describe po [podname]
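
The Events section at the bottom of the describe output often explains restarts and evictions, and since a replaced pod's own events are garbage-collected with it, the deployment's and cluster-level events are worth checking too. A minimal sketch, assuming the deployment is named my-deployment (a hypothetical name) in the default namespace:

# Events for the current pod: scheduling, image pulls, kills, OOM, probe failures.
kubectl describe po [podname]

# Deployment and cluster-level events may still record why the old pod went away.
kubectl describe deployment my-deployment
kubectl get events --sort-by=.metadata.creationTimestamp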

-- P Ekambaram
Source: StackOverflow

1/28/2019

There is not much chance you will retrieve this information, but try the following:

1) If you know your failed container's id, try to find the old logs here (see the sketch below):

/var/lib/docker/containers/<container id>/<container id>-json.log

2) Look at the kubelet's logs:

journalctl -u kubelet
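
A minimal sketch combining both steps, assuming you can SSH to the node that ran the pod and that the node uses the Docker runtime (the json.log path above is Docker-specific):

# 1) List all containers, including exited ones, to recover the old container id.
docker ps -a | grep [containername]

# Read the dead container's log file directly.
sudo cat /var/lib/docker/containers/<container id>/<container id>-json.log

# 2) Filter the kubelet's journal for lines mentioning the pod.
journalctl -u kubelet --no-pager | grep [podname]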
-- VKR
Source: StackOverflow