Frequent heap out-of-memory errors in a Node.js application inside a Docker container

11/16/2019

My Node.js application is made up of 16 microservices, each packaged as a Docker image and hosted on Google Cloud Platform with Kubernetes.

But with only around 100 users' API requests, some of the main containers crash with a JavaScript heap out-of-memory error.

I checked those containers: Node.js has a heap memory limit of about 1.4 GB, but it gets fully used very quickly, even under a low amount of API traffic.

How do I manage/allocate heap memory for Node.js in Docker/Kubernetes? Alternatively, is there any way to find out where the memory leak is happening?

-- Jana
docker
heap-memory
javascript
kubernetes
node.js

1 Answer

11/20/2019

From the Kubernetes point of view, you should consider the concept of Managing Compute Resources for Containers:

When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node.

The spec.containers[].resources.limits.memory is converted to an integer, and used as the value of the --memory flag in the docker run command.
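
For instance, a limit of 100Mi ends up as roughly the following Docker invocation (the image name here is just a placeholder):

# Roughly what the kubelet tells Docker for a 100Mi memory limit;
# 104857600 bytes == 100Mi, and the image name is a placeholder
docker run --memory=104857600 my-node-service:latest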

A Container can exceed its memory request if the Node has memory available. But a Container is not allowed to use more than its memory limit. If a Container allocates more memory than its limit, the Container becomes a candidate for termination. If the Container continues to consume memory beyond its limit, the Container is terminated.
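
A quick way to confirm that this is what is happening to your services is to check the last termination state of a crashed container; an out-of-memory kill is reported as OOMKilled (your_pod is a placeholder):

# Prints OOMKilled if the container was killed for exceeding its limit
kubectl get pod your_pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'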

Why use memory limits:

The Container has no upper bound on the amount of memory it uses. The Container could use all of the memory available on the Node where it is running which in turn could invoke the OOM Killer.

As an example of setting resource requests and limits (note that the stress command below deliberately tries to allocate 250M, more than the 100Mi limit, so this container will be OOM-killed; that makes it a convenient demonstration):

apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-2
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-2-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]

To find out more information about the state of your pods/containers, you can use:

kubectl describe pod your_pod

If the metrics server is installed:

kubectl top pod your_pod ## to see memory usage

From the Node.js perspective, you will probably be interested in raising the default V8 heap limit with the --max-old-space-size flag and in profiling the heap to track down the leak.
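
A minimal sketch of both, assuming your entry point is server.js (a placeholder):

# Raise the V8 old-space limit to 2 GiB; older 64-bit Node.js builds
# default to roughly 1.4-1.5 GiB, which matches the limit you observed
node --max-old-space-size=2048 server.js

# To locate the leak, start with the inspector enabled, open
# chrome://inspect in Chrome, and take heap snapshots over time
node --inspect server.js

Keep --max-old-space-size comfortably below the container's Kubernetes memory limit, so that V8 runs garbage collection before the kernel OOM killer steps in.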

Hope this helps.

-- Hanx
Source: StackOverflow