K8s pod memory higher than process requires

12/3/2018

I have a Node.js application that logs its memory usage from inside the process:

rss: 161509376, 
heapTotal: 97697792, 
heapUsed: 88706896, 
external: 733609
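
These values come from process.memoryUsage(); the logging is essentially the following (a simplified sketch, the interval and formatting are illustrative, not the exact code):

// Simplified sketch of the logging, not the actual application code.
setInterval(() => {
  const mem = process.memoryUsage();
  console.log({
    rss: mem.rss,             // resident set size: total memory held by the process
    heapTotal: mem.heapTotal, // memory V8 has reserved for the JS heap
    heapUsed: mem.heapUsed,   // heap actually occupied by JS objects
    external: mem.external    // memory of C++ objects bound to JS (e.g. Buffers)
  });
}, 60 * 1000);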

And the kubectl top pod command reports how much memory the pod is using:

NAME                              CPU(cores)   MEMORY(bytes)
api-596d754fc6-s7xvc              2m           144Mi 

As you can see, the Node app is using only 93 MB of memory, while Kubernetes says that the pod consumes 144 MB.
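
For reference, the reported byte counts converted to MiB (the 93 MB presumably being heapTotal):

// Byte counts from the log above, converted to MiB (1 MiB = 1024 * 1024 bytes).
const MiB = 1024 * 1024;
console.log((161509376 / MiB).toFixed(1)); // rss       -> 154.0 MiB
console.log((97697792 / MiB).toFixed(1));  // heapTotal ->  93.2 MiB
console.log((88706896 / MiB).toFixed(1));  // heapUsed  ->  84.6 MiB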

We are using Alpine as the base image for the Node.js app. I checked the raw Alpine image with all dependencies installed but without the actual application running, and it consumed about 4-8 MB of memory. The deployment has limits set:

...
resources:
  limits:
    memory: 400Mi
    cpu: 2
  requests:
    memory: 90Mi
    cpu: 100m

So the requested memory is lower than what Kubernetes shows me. I would expect to see something closer to the actual memory consumption, say 100 MB.

How can I work out where this additional memory comes from? Why do these numbers differ?

All tests were run against a single pod (the service has a single pod, so there is no mix-up here).

Update 1.

The Docker image looks like this:

FROM node:8-alpine

ENV NODE_ENV development
ENV PORT XXXX

RUN echo https://repository.fit.cvut.cz/mirrors/alpine/v3.8/main > /etc/apk/repositories; \
    echo https://repository.fit.cvut.cz/mirrors/alpine/v3.8/community >> /etc/apk/repositories

RUN apk update && \
    apk upgrade && \
    apk --no-cache add git make gcc g++ python

RUN apk --no-cache add vips-dev fftw-dev build-base \
    --repository https://repository.fit.cvut.cz/mirrors/alpine/edge/testing/ \
    --repository https://repository.fit.cvut.cz/mirrors/alpine/edge/main

WORKDIR /app

COPY ./dist /app

RUN npm install --only=production --unsafe-perm

RUN apk del make gcc g++ python build-base && \
    rm /var/cache/apk/*

EXPOSE XXXX

CMD node index.js

-- QuestionAndAnswer
alpine
docker
kubernetes
node.js

2 Answers

12/3/2018

The other main aspect that consumes memory in Node.js (and other language runtimes) is the garbage collector. You didn't mention whether you see an upward consumption pattern, but if you do, it's possible that you have some sort of leak. If your consumption remains stable, that may simply be the amount your specific application (plus the garbage collector's overhead) needs. For more insight into when and what the garbage collector is doing, you can use node-gc-profiler.
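
If you prefer not to add a dependency, Node's built-in perf_hooks module (available since Node 8.5) can also surface GC activity. A minimal sketch (this uses perf_hooks, not node-gc-profiler's API):

// Log each garbage-collection pass: what kind it was and how long it took.
const { PerformanceObserver, constants } = require('perf_hooks');

const kinds = {
  [constants.NODE_PERFORMANCE_GC_MAJOR]: 'major',
  [constants.NODE_PERFORMANCE_GC_MINOR]: 'minor',
  [constants.NODE_PERFORMANCE_GC_INCREMENTAL]: 'incremental',
  [constants.NODE_PERFORMANCE_GC_WEAKCB]: 'weak-callbacks'
};

const obs = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    console.log(`gc ${kinds[entry.kind]} took ${entry.duration.toFixed(2)} ms`);
  });
});
obs.observe({ entryTypes: ['gc'] });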

This blog post sheds light on Node.js memory consumption and garbage collection. There are also a ton of online resources on how to troubleshoot Node.js memory usage and garbage collection: [1], [2], [3], etc.

Hope it helps!

-- Rico
Source: StackOverflow

9/24/2019

You are most likely seeing a combination of the pod's actual memory consumption and the buffer/cache memory that the kernel uses for file caching.

This is probably a bug in Kubernetes itself. See this issue: https://github.com/kubernetes/kubernetes/issues/43916

This will happen if your pod reads or writes files. If your pod ever reaches the limit, the kernel will drop its buffer cache before the OOM killer kicks in, so it's not too dangerous to set a hard limit. If you do not set a limit, the node will eventually stop scheduling pods or even restart pods that are consuming too much "memory".
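
To see the split from inside the pod, you can read the container's cgroup counters directly. A minimal sketch (it assumes cgroup v1 with the memory controller mounted at the usual /sys/fs/cgroup/memory path; kubectl top roughly reports the working set, i.e. usage minus inactive file cache):

// Read the container's cgroup v1 memory counters (path is an assumption; adjust if mounted elsewhere).
const fs = require('fs');

const CGROUP = '/sys/fs/cgroup/memory';
const MiB = 1024 * 1024;

// Total memory charged to the cgroup, including page cache.
const usage = Number(fs.readFileSync(CGROUP + '/memory.usage_in_bytes', 'utf8'));

// memory.stat is a list of "<key> <value>" lines.
const stat = {};
fs.readFileSync(CGROUP + '/memory.stat', 'utf8')
  .trim()
  .split('\n')
  .forEach(function (line) {
    const parts = line.split(' ');
    stat[parts[0]] = Number(parts[1]);
  });

// The "working set" (roughly what kubectl top shows) is usage minus inactive file cache.
const workingSet = usage - stat.total_inactive_file;

console.log({
  usageMiB: (usage / MiB).toFixed(1),
  rssMiB: (stat.total_rss / MiB).toFixed(1),
  cacheMiB: (stat.total_cache / MiB).toFixed(1),
  workingSetMiB: (workingSet / MiB).toFixed(1)
});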

-- Nico
Source: StackOverflow