The scenario: we run some websites based on an nginx image. Our original cluster had nodes with 2 cores and 4 GB RAM each, and the pods were configured with cpu: 40m and memory: 100MiB. Later we upgraded the cluster to nodes with 4 cores and 8 GB RAM each, but every pod kept getting OOMKilled. We increased the memory on every pod to around 300MiB, and then everything seemed to work fine.
My question is: why does this happen, and how do I solve it? P.S. If we revert to nodes with 2 cores and 4 GB RAM each, the pods work just fine with the reduced 100MiB setting.
Any help would be highly appreciated. Regards.
For each container in Kubernetes you can configure resources for both CPU and memory, like the following:
resources:
  limits:
    cpu: 100m
    memory: "200Mi"
  requests:
    cpu: 50m
    memory: "100Mi"
According to the documentation:
When you specify the resource request for Containers in a Pod, the scheduler uses this information to decide which node to place the Pod on. When you specify a resource limit for a Container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set.
So if you set memory: "100Mi" under resources.limits and your container consumes more than 100Mi of memory, the container will be terminated with an OOMKilled status.
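That matches the fix described in the question: raising the memory limit gives the container more headroom before the kubelet OOM-kills it. A sketch using the ~300Mi value mentioned in the question, assuming that value is applied to the limit (the CPU limit below is an assumed figure, and the right numbers depend on what your nginx workload actually uses):

resources:
  requests:
    cpu: 40m            # values from the question; adjust to your workload
    memory: "100Mi"
  limits:
    cpu: 100m           # assumed value, not from the question
    memory: "300Mi"     # the raised limit that stopped the OOM kills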
For more details about requests and limits on resources, click here.