When we try to extract a large table from a SQL Server database, we get an error:
Containerized process terminated by signal 119.
As I understand it, Kubernetes limits how many GB of memory each Pod's containers can use. So if we have a memory limit and the table is expected to be larger than that, what options do we have?
A Container can exceed its memory request if the Node has memory available. But a Container is not allowed to use more than its memory limit. If a Container allocates more memory than its limit, the Container becomes a candidate for termination. If the Container continues to consume memory beyond its limit, the Container is terminated. If a terminated Container can be restarted, the kubelet restarts it, as with any other type of runtime failure. <sup>[source]</sup>
There are two possible reasons:

- the container has exceeded the memory limit configured in the `spec.containers[].resources.limits.memory` field; or
- the Node the Pod is running on does not have enough memory.

In the first case you can increase the memory limit by changing the `spec.containers[].resources.limits.memory` value.
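As a minimal sketch, a Pod spec with the limit raised might look like this (the names, image, and sizes below are placeholders, not values from your cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: table-extractor          # hypothetical Pod name
spec:
  containers:
  - name: extractor
    image: registry.example.com/extractor:latest   # placeholder image
    resources:
      requests:
        memory: "2Gi"            # the scheduler reserves this much for the Pod
      limits:
        memory: "8Gi"            # the container is killed if it allocates more than this
```

Note that the limit you set must still fit within the allocatable memory of some node, or the Pod will stay unschedulable.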
In the second case you can either increase the node's resources or make sure the Pod is scheduled on a node with more available memory.
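One way to steer the Pod toward a larger node is a `nodeSelector` (the label below is an assumption; your cluster's node labels may differ):

```yaml
spec:
  nodeSelector:
    node-size: large   # hypothetical label applied to your high-memory nodes
```

You can inspect a node's allocatable memory with `kubectl describe node <node-name>` before choosing where the Pod should land.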