I've configured a CronJob with a YAML file (apiVersion: batch/v1beta1) with resource requests and limits. The pod is instantiated successfully and runs to its natural end, but when it terminates it is marked as OOMKilled.
My pod runs a Docker container, which is started by a bash script that invokes some Java tools (such as Maven). I tried to set JAVA_OPTS and MAVEN_OPTS like this:
env:
- name: JAVA_OPTS
value: "-Xms256M -Xmx1280M"
- name: MAVEN_OPTS
value: "-Xms256M -Xmx1280M"
These values are lower than the limits configured in the YAML.
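For reference, the resources section of the container in my CronJob spec looks roughly like this (the values here are illustrative, not my exact numbers):

resources:
  requests:
    memory: "512Mi"
    cpu: "250m"
  limits:
    memory: "1536Mi"
    cpu: "1"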
I expected the pod to end in Completed status, since the last echo of my ENTRY_POINT bash script shows up in the pod's log, but instead I get the OOMKilled.
When Containers have resource requests specified, the scheduler can make better decisions about which nodes to place Pods on. But keep in mind: Compute Resources (CPU/Memory) are configured for Containers, not for Pods.
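For example, in a pod with two containers, each container carries its own limit (all names here are placeholders); there is no pod-level memory limit, although the node effectively sees the sum of the container limits:

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: app
    image: my-app:latest
    resources:
      limits:
        memory: "1Gi"
  - name: sidecar
    image: my-sidecar:latest
    resources:
      limits:
        memory: "256Mi"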
If a Pod container is OOM killed, the Pod is not evicted. The underlying container is restarted by the kubelet based on its RestartPolicy. Your container being terminated by OOMKill does not mean the Pod ends up in a Completed/Error status (unless you're using RestartPolicy: Never).
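For a CronJob, the restart policy is set in the pod template under jobTemplate, and only Never and OnFailure are allowed there. A minimal sketch with placeholder names:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          # With Never, an OOMKilled container is not restarted in place;
          # the Job controller creates a replacement pod instead.
          restartPolicy: Never
          containers:
          - name: my-container
            image: my-image:latest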
If you run kubectl describe on your pod, the container will be in the Running state, but you can find the cause of the last restart under Last State. You can also check how many times it was restarted:
State:          Running
  Started:      Wed, 27 Feb 2019 10:29:09 +0000
Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137
  Started:      Wed, 27 Feb 2019 06:27:39 +0000
  Finished:     Wed, 27 Feb 2019 10:29:08 +0000
Restart Count:  5
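Exit code 137 corresponds to SIGKILL (128 + 9), which is the signal the kernel's OOM killer sends. If you only want the restart count and the last termination reason rather than the full describe output, something like this should work (the pod name is a placeholder):

kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].restartCount}'
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'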