How to limit memory size for a .NET Core application in a Kubernetes pod?

4/6/2019

I have a Kubernetes cluster with 16 GB of RAM on each node

And a typical .NET Core Web API application

I tried to configure limits like this:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container

But my app believes it can use 16 GB

Because cat /proc/meminfo | head -n 1 returns MemTotal: 16635172 kB (or maybe it reads something from cgroups, I'm not sure)
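
For reference, the limit the container is actually given lives in the cgroup filesystem, not in /proc/meminfo. A minimal sketch of reading it from managed code (assuming the cgroup v1 path /sys/fs/cgroup/memory/memory.limit_in_bytes; cgroup v2 exposes /sys/fs/cgroup/memory.max instead):

using System;
using System.IO;

class CgroupMemoryLimit
{
    // cgroup v1 path; when no limit is set this file contains a huge sentinel value
    const string LimitFile = "/sys/fs/cgroup/memory/memory.limit_in_bytes";

    static void Main()
    {
        // /proc/meminfo reports the host's memory, not the pod's limit
        if (File.Exists(LimitFile) &&
            long.TryParse(File.ReadAllText(LimitFile).Trim(), out var limitBytes))
        {
            Console.WriteLine($"cgroup memory limit: {limitBytes / (1024 * 1024)} MiB");
        }
        else
        {
            Console.WriteLine("No cgroup v1 memory limit file found");
        }
    }
}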

So... maybe the limit does not work?

No! Kubernetes successfully kills my pod when it reaches the memory limit.

.NET Core has an interesting GC mode (Server GC), more details here. It is a good mode, but it doesn't look like a working solution for Kubernetes, because the application gets wrong information about the available memory. Unlimited pods could grab all of the host's memory, while pods with limits just get killed.

Now I see two ways:

  1. Use Workstation GC
  2. Use limits and a Kubernetes readiness probe: the handler checks current memory usage on each call and triggers GC.Collect() if the used memory is near 80% of the limit (I'll pass the limit via an environment variable), roughly as sketched below
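
A rough sketch of what the handler in option 2 could look like (the MEMORY_LIMIT_MB variable name and the 80% threshold are just illustrative assumptions, not anything Kubernetes or .NET prescribes):

using System;

public static class MemoryPressureCheck
{
    // Limit passed in from the pod spec as an environment variable (name is made up here)
    static readonly long LimitBytes =
        long.Parse(Environment.GetEnvironmentVariable("MEMORY_LIMIT_MB") ?? "512") * 1024 * 1024;

    public static bool IsHealthy()
    {
        long used = GC.GetTotalMemory(forceFullCollection: false);

        if (used > LimitBytes * 0.8)
        {
            // Try to give memory back before the kubelet OOM-kills the pod
            GC.Collect();
            GC.WaitForPendingFinalizers();
            used = GC.GetTotalMemory(forceFullCollection: false);
        }

        return used < LimitBytes;
    }
}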

How to limit memory size for a .NET Core application in a Kubernetes pod?

How do I correctly set memory limits for pods in Kubernetes?

-- SanSYS
.net-core
asp.net-core
asp.net-web-api
kubernetes

1 Answer

4/6/2019

You should switch to Workstation GC to optimize for lower memory usage. The readiness probe is not meant for checking memory.
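
The switch itself happens outside the application code, for example via the ServerGarbageCollection property in the project file or the COMPlus_gcServer environment variable. A small sketch to verify at runtime which mode is actually active:

using System;
using System.Runtime;

class GcModeCheck
{
    static void Main()
    {
        // False means Workstation GC is in use, True means Server GC
        Console.WriteLine($"Server GC enabled: {GCSettings.IsServerGC}");
        Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");
    }
}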

In order to properly configure the resource limits, you should test your application on a single pod under heavy load and monitor the usage (e.g. with Prometheus & Grafana). For more in-depth details see this blog post. If you haven't deployed a monitoring stack, you can at least use kubectl top pods.

Once you have found the breaking points of a single pod, you can add limits to that specific pod like in the example below (see the Kubernetes documentation for more examples and details):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: net-core-app
    image: net-core-image
    resources:
      requests:
        memory: 64Mi
        cpu: 250m
      limits:
        memory: 128Mi
        cpu: 500m

The readiness probe is actually meant to tell when a Pod is ready to receive traffic in the first place. I guess you were thinking of the liveness probe, but that wouldn't be an adequate use either, because Kubernetes will already kill the Pod when it exceeds its resource limit and reschedule it.

-- tomaaron
Source: StackOverflow