MongoDB RAM Usage in Kubernetes Pods - Not Aware of Node Limits

3/5/2018

In Google Container Engine's Kubernetes I have 3 nodes, each with 3.75 GB of RAM.

Now I also have an API that is called through a single endpoint. This endpoint makes batch inserts into MongoDB like this:

IMongoCollection<T> stageCollection = Database.GetCollection<T>(StageName);

// insert the incoming entities in chunks of 1,000 documents
foreach (var batch in entities.Batch(1000))
{
  await stageCollection.InsertManyAsync(batch);
}

Now it happens very often that we end up in out-of-memory scenarios.

On the one hand we limited wiredTigerCacheSizeGB to 1.5 GB, and on the other hand we defined a resource limit for the pod.

But still the same result. To me it looks like MongoDB isn't aware of the memory limit of the pod (or node) it runs on. Is this a known issue? How do I deal with it without scaling up to "monster" machines?
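For what it's worth, this is how I check the kills (just the standard kubectl commands; the pod name mongo-0 comes from the StatefulSet below, and kubectl top needs heapster / metrics-server in the cluster):

# after a restart the container's Last State shows Reason: OOMKilled
kubectl describe pod mongo-0

# compare the pod's live memory usage against the 2Gi limit
kubectl top pod mongo-0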

The configuration YAML looks like this:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:3.6
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--bind_ip"
            - "0.0.0.0"
            - "--noprealloc"
            - "--wiredTigerCacheSizeGB"
            - "1.5"
          resources:
            limits:
              memory: "2Gi"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 32Gi

UPDATE

In the meantime I also configured pod anti-affinity to make sure that there is no RAM interference from other workloads on the nodes where MongoDB is running, but we still get the OOM scenarios.
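The anti-affinity block added to the StatefulSet pod spec looks roughly like this (a simplified sketch from memory; app=api is a placeholder label for the other workloads that should stay off the MongoDB nodes):

      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app          # placeholder label of the other workloads
                    operator: In
                    values:
                      - api
              topologyKey: kubernetes.io/hostname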

-- Boas Enkler
google-kubernetes-engine
kubernetes
mongodb

0 Answers