K8s Job completed but the pod shows "OOMKilled"

5/9/2020

I ran a Job with a 2Gi memory limit, which turned out not to be enough (a minimal sketch of the spec is below, after the status). The Job status shows "Complete" with 1 succeeded:

status:
  completionTime: "2020-05-09T03:44:07Z"
  conditions:
  - lastProbeTime: "2020-05-09T03:44:07Z"
    lastTransitionTime: "2020-05-09T03:44:07Z"
    status: "True"
    type: Complete
  startTime: "2020-05-09T03:42:07Z"
  succeeded: 1
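
For reference, the Job spec was roughly of the following shape. This is a minimal sketch: the image and Job name are placeholders I'm filling in, and only the 2Gi limit and the fairing-job container name come from the real output. Requests equal limits, which matches the Guaranteed qosClass in the pod status further down.

apiVersion: batch/v1
kind: Job
metadata:
  name: fairing-job            # placeholder name
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: fairing-job      # matches containerStatuses[0].name below
        image: training-image:latest   # placeholder image
        resources:
          requests:
            cpu: "1"           # assumed; requests must equal limits for Guaranteed QoS
            memory: 2Gi
          limits:
            cpu: "1"           # assumed
            memory: 2Gi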

whereas the pod's container shows a terminated state with reason "OOMKilled", even though the exit code is 0 and the pod phase is Succeeded:

status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-05-09T03:42:07Z"
    reason: PodCompleted
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-05-09T03:44:07Z"
    reason: PodCompleted
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-05-09T03:44:07Z"
    reason: PodCompleted
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-05-09T03:42:07Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://50db639d6f29a56f12b0b878a2a5ee957a9e36fb6dcd089831e4422b85a42e3a
    lastState: {}
    name: fairing-job
    ready: false
    restartCount: 0
    state:
      terminated:
        containerID: docker://50db639d6f29a56f12b0b878a2a5ee957a9e36fb6dcd089831e4422b85a42e3a
        exitCode: 0
        finishedAt: "2020-05-09T03:44:07Z"
        reason: OOMKilled
        startedAt: "2020-05-09T03:42:13Z"
  hostIP: 10.201.1.202
  phase: Succeeded
  podIP: 10.178.140.83
  qosClass: Guaranteed
  startTime: '2020-05-09T07:38:01Z'
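
For completeness, both status dumps above came from kubectl, along these lines (the actual pod name has a generated suffix, elided here):

kubectl get job fairing-job -o yaml
kubectl get pod fairing-job-<suffix> -o yaml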

Raising the memory limit makes the problem go away, but I can't figure out how this happened in the first place. Does this mean the Job status can sometimes be wrong?
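
The fix itself was just a bigger limit in the container's resources block, along these lines (4Gi is an example value; I only know that raising the limit made the OOMKilled reason go away):

resources:
  requests:
    memory: 4Gi   # example value, assumed; pick something above the job's peak usage
  limits:
    memory: 4Gi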

-- fresh learning
jobs
kubernetes

0 Answers