Managing Eviction on Kubernetes for Node.js and Puppeteer

1/22/2021

I am currently seeing a strange issue where I have a Pod that is constantly being Evicted by Kubernetes.

My Cluster / App Information:

  • Node size: 7.5GB RAM / 2vCPU
  • Application Language: nodejs
  • Use Case: Puppeteer website extraction (I have code that loads a website, extracts an element, and repeats this a couple of times per hour; a simplified sketch of that loop follows this list)
  • Running on Azure Kubernetes Service (AKS)
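
A simplified sketch of that extraction loop (the URL, selector, and interval are placeholders; --no-sandbox is a typical flag for running Chrome inside a container):

const puppeteer = require('puppeteer');

// Simplified sketch: load a page, extract one element, repeat a few times per hour.
async function extractOnce() {
  const browser = await puppeteer.launch({ args: ['--no-sandbox'] });
  try {
    const page = await browser.newPage();
    await page.goto('https://example.com', { waitUntil: 'networkidle2' });
    const text = await page.$eval('#some-element', (el) => el.textContent);
    console.log(text);
  } finally {
    await browser.close(); // always close the browser, even if goto/$eval throws
  }
}

// A couple of runs per hour.
setInterval(() => extractOnce().catch(console.error), 20 * 60 * 1000);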

What I tried:

  • Checked that Puppeteer is closed correctly and that no stray Chrome instances are left behind. After adding a force kill this seems to be the case (see the cleanup sketch after this list)

  • Checked kubectl get events, which shows the following lines:

8m17s       Normal    NodeHasSufficientMemory   node/node-1              Node node-1 status is now: NodeHasSufficientMemory
2m28s       Warning   EvictionThresholdMet      node/node-1              Attempting to reclaim memory
71m         Warning   FailedScheduling          pod/my-deployment     0/4 nodes are available: 1 node(s) had taint {node.kubernetes.io/memory-pressure: }, that the pod didn't tolerate, 3 node(s) didn't match node selector
  • Checked kubectl top pods, which shows the pod was only using ~30% of the node's memory
  • Added resource limits in my Kubernetes .yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-d
spec:
  replicas: 1
  # selector/labels are required by apps/v1; the label value here is assumed
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: main
        image: my-image
        imagePullPolicy: Always
        resources:
          limits:
            memory: "2Gi"

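For completeness, the force kill mentioned in the first bullet is roughly the following (a sketch of the idea rather than my exact code; the SIGKILL fallback via browser.process() is one way to do it):

// Close the browser and, if Chrome is still around, kill it hard.
async function closeBrowser(browser) {
  const proc = browser.process(); // underlying Chrome child process (may be null)
  try {
    await browser.close();
  } catch (err) {
    console.error('browser.close() failed:', err);
  }
  if (proc && !proc.killed) {
    try {
      proc.kill('SIGKILL'); // force-kill any Chrome instance that refuses to exit
    } catch (err) {
      // process already gone
    }
  }
}
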
Current way of thinking:

A node has X memory in total, but only Y of that X is actually allocatable due to reserved space. However, when I run os.totalmem() in Node.js, I still see the full X, so the process believes it can allocate up to X.
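
A quick way to verify this from inside the Pod is to compare os.totalmem() with the cgroup memory limit (a sketch; the file paths assume a standard Linux container with cgroup v2 or v1):

const os = require('os');
const fs = require('fs');

// os.totalmem() reports the node's total memory, not the container limit.
console.log('os.totalmem():', os.totalmem());

// The actual container limit lives in the cgroup filesystem (try v2 first, then v1).
for (const file of ['/sys/fs/cgroup/memory.max', '/sys/fs/cgroup/memory/memory.limit_in_bytes']) {
  try {
    console.log(file + ':', fs.readFileSync(file, 'utf8').trim());
    break;
  } catch (err) {
    // this cgroup version is not mounted here, try the next path
  }
}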

My thinking is that Node.js keeps allocating towards X because its garbage collection only kicks in near that figure, while it should actually kick in at Y instead of X. And with my container limit set, I expected the process to see that limit instead of the K8s node's total memory.
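
Related to the garbage collection point: as far as I understand, V8 derives its default heap limit from the memory it detects on the machine rather than from the container limit, which can be inspected like this (sketch):

const v8 = require('v8');

// heap_size_limit is the size V8 will let the heap grow to before it OOMs;
// by default it is derived from the memory V8 thinks the machine has.
const stats = v8.getHeapStatistics();
console.log('V8 heap_size_limit (MB):', Math.round(stats.heap_size_limit / 1024 / 1024));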

Question

Are there any other things I should try to resolve this? Has anyone run into this before?

-- Xavier Geerinck
azure
kubernetes
node.js

1 Answer

1/24/2021

Your Node.js app is not aware that it runs in a container. It only sees the amount of memory the Linux kernel reports, which is always the total node memory. You should make your app aware of cgroup limits; see https://medium.com/the-node-js-collection/node-js-memory-management-in-container-environments-7eb8409a74e8
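
One common pattern (a sketch, not taken verbatim from the linked article; entrypoint.js and app.js are placeholder names) is a small wrapper that reads the cgroup limit and starts the real app with a matching --max-old-space-size:

// entrypoint.js - cap the V8 heap based on the container's cgroup memory limit.
const fs = require('fs');
const { spawn } = require('child_process');

function cgroupLimitBytes() {
  // cgroup v2 first, then v1; 'max' (v2) or an absurdly large value (v1) means "no limit".
  for (const file of ['/sys/fs/cgroup/memory.max', '/sys/fs/cgroup/memory/memory.limit_in_bytes']) {
    try {
      const raw = fs.readFileSync(file, 'utf8').trim();
      const bytes = Number(raw);
      if (raw !== 'max' && Number.isFinite(bytes) && bytes < 2 ** 50) return bytes;
    } catch (err) { /* try the next cgroup version */ }
  }
  return null;
}

const limit = cgroupLimitBytes();
const args = [];
if (limit) {
  // Leave headroom for buffers, Chrome, etc. - use ~75% of the limit for the JS heap.
  args.push('--max-old-space-size=' + Math.floor((limit * 0.75) / 1024 / 1024));
}
args.push('app.js'); // the real application

spawn(process.execPath, args, { stdio: 'inherit' })
  .on('exit', (code) => process.exit(code == null ? 1 : code));

Keeping the heap limit below the container limit gives V8 a chance to collect garbage before the kernel OOM-kills the container.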

With regard to evictions: now that you have set memory limits, did that solve your problem with evictions?

And don't trust kubectl top pods too much. It always shows data with some delay.

-- Vasili Angapov
Source: StackOverflow