I have a pod with multiple containers and one of them (containerA) exits with error:
Containers:
  containerA:
    ......
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 27 Sep 2019 16:21:53 -0700
      Finished:     Fri, 27 Sep 2019 16:21:53 -0700
    Ready:          False
    Restart Count:  0
  containerB:
    ......
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 27 Sep 2019 16:21:54 -0700
      Finished:     Fri, 27 Sep 2019 16:21:59 -0700
    Ready:          False
    Restart Count:  0
  containerC:
    ......
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 27 Sep 2019 16:21:54 -0700
      Finished:     Fri, 27 Sep 2019 16:21:58 -0700
    Ready:          False
    Restart Count:  0
This pod has restartPolicy: Never, and it is controlled by a Job with backoffLimit: 9. During all the attempts, the pod status is shown as:
NAME           READY   STATUS      RESTARTS   AGE
my-pod-2scsn   0/4     Completed   0          3d18h
my-pod-8z7qq   0/4     Completed   0          3d18h
my-pod-9cjnc   0/4     Completed   0          3d18h
my-pod-f6hxr   0/4     Completed   0          3d18h
my-pod-fz7hk   0/4     Completed   0          3d18h
.....
This Completed status is confusing here (one of the containers exited with Error). Why is the pod status Completed here?
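For reference, a minimal Job manifest producing this setup might look like the following sketch (the Job name and images are hypothetical placeholders; the READY column above shows 0/4, so a fourth container elided from the describe output is omitted here as well):

apiVersion: batch/v1
kind: Job
metadata:
  name: my-pod                 # hypothetical Job name behind the my-pod-xxxxx Pods
spec:
  backoffLimit: 9              # give up after 9 failed attempts
  template:
    spec:
      restartPolicy: Never     # never restart containers in place; each retry is a new Pod
      containers:
        - name: containerA
          image: example.com/image-a   # hypothetical image; this one exits with code 1
        - name: containerB
          image: example.com/image-b   # hypothetical image
        - name: containerC
          image: example.com/image-c   # hypothetical image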
According to the official Kubernetes documentation, a Job treats a Pod as failed once any of its containers exits with a non-zero exit code, or when the Pod is detected exceeding its resource limits. The Pod phase is therefore the main indicator in the generic Pod lifecycle, and it is what tells the owning Job about the Pod's most recent status.
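You can confirm the phase of such a Pod directly with a jsonpath query (Pod name taken from the listing above):

$ kubectl get pod my-pod-2scsn -o jsonpath='{.status.phase}'

Given restartPolicy: Never and containerA's non-zero exit code, this should print Failed.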
However, what I have observed is that the STATUS column of kubectl get pod output doesn't display the Pod phase; instead, it retrieves a value for a particular container inside the Pod and uses the .status.containerStatuses.state.terminated.reason field to populate the STATUS column.
In fact, you can get even more informative output by supplying custom columns to the standard kubectl command-line tool:
$ kubectl get po -o=custom-columns=NAME:.metadata.name,PHASE:.status.phase,CONTAINERS:.spec.containers[*].name,STATUS:.status.containerStatuses[*].state.terminated.reason
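For the Pods above, the PHASE column should read Failed while the STATUS column lists the per-container termination reasons (Error for containerA, Completed for the others), which makes the mismatch between the Pod phase and the default STATUS column visible at a glance.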
Once the Job reaches its backoffLimit: 9 retry count, it terminates with a BackoffLimitExceeded warning message:
Warning BackoffLimitExceeded 54m job-controller Job has reached the specified backoff limit
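The failure is also recorded in the Job's status conditions, which you can query directly (my-job is a hypothetical placeholder for the actual Job name):

$ kubectl get job my-job -o jsonpath='{.status.conditions[?(@.type=="Failed")].reason}'

Once the backoff limit has been exceeded, this should print BackoffLimitExceeded.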