We have a situation where Kubernetes kills our Mongo containers when they reach the container's memory limit. Even though this is expected behavior from Kubernetes, it seems like Mongo is not reusing its memory: usage keeps growing day by day even though the user load and transaction volume stay the same. What should we check, and how can we either stop Mongo from reaching the container's memory limit or flush Mongo's memory at regular intervals?
I have tried increasing the memory limit, which kept the pods running a couple more days before Kubernetes killed them again.
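One thing worth checking first (my suggestion, not part of the original post) is whether the growth is in the WiredTiger cache or elsewhere (connections, in-memory sorts, etc.). A minimal sketch using the standard serverStatus command; the pod name mongo-0 is a placeholder:

kubectl exec mongo-0 -- mongo --quiet --eval '
  var s = db.serverStatus();
  print("cache bytes in use: " + s.wiredTiger.cache["bytes currently in the cache"]);
  print("resident MB:        " + s.mem.resident);
  print("connections:        " + s.connections.current);
'

If the cache figure plateaus while resident memory keeps climbing, the growth is outside the cache and capping the cache alone will not fix it.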
"containers": [
{
"name": "mongo",
"image": "dockercentral.com:5870/com.public/mongodb:3.6",
"ports": [
{
"containerPort": 27017,
"protocol": "TCP"
}
]
"resources": {
"limits": {
"cpu": "1",
"memory": "24Gi"
},
"requests": {
"cpu": "250m",
"memory": "24Gi"
}
}
"name": "MONGO_SECURITY",
"value": "true"
}
],
"resources": {
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "150m",
"memory": "256Mi"
}
},
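For context (my addition, not from the original question): by default WiredTiger targets a cache of max(50% of (RAM - 1 GB), 256 MB), and depending on the 3.6 point release, mongod running in a container may compute this from the host's total memory rather than from the cgroup limit. A back-of-the-envelope check, assuming a hypothetical 64 GB host:

# Default WiredTiger cache target: max(0.5 * (RAM_GB - 1), 0.25) GiB.
# If mongod sees 64 GB of host RAM instead of the 24Gi container limit:
echo "0.5 * (64 - 1)" | bc -l    # => 31.50, a cache target above the limit

This is why pinning cacheSizeGB explicitly, as the answer below does, is the usual fix.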
Based on what Stennie from MongoDB, Inc. wrote in a comment on the question, the following command in the Kubernetes .yaml works for me:
command:
- "sh"
- "-c"
- >
  echo "storage:" >> /etc/mongod.conf;
  echo "  wiredTiger:" >> /etc/mongod.conf;
  echo "    engineConfig:" >> /etc/mongod.conf;
  echo "      cacheSizeGB: 2" >> /etc/mongod.conf;
  echo "replication:" >> /etc/mongod.conf;
  echo "  replSetName: YOUR_REPL_NAME" >> /etc/mongod.conf;
  mongod --config /etc/mongod.conf;
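Alternatively (my own sketch, not from the original answer), you can skip the generated config file and pass the same settings as mongod flags; --wiredTigerCacheSizeGB and --replSet are standard mongod options:

command:
- "mongod"
- "--replSet"
- "YOUR_REPL_NAME"
- "--wiredTigerCacheSizeGB"
- "2"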
There is also a way to set it at runtime:
db.adminCommand( { "setParameter": 1, "wiredTigerEngineRuntimeConfig":"cache_size=2G"})
which also works just fine, but doing it through the Kubernetes yaml file looks easier, because to issue a runtime command you have to wait until mongo is up and running.
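To confirm that either approach took effect (my addition), serverStatus reports the configured ceiling, which should read 2147483648 bytes for a 2G cache:

mongo --quiet --eval 'print(db.serverStatus().wiredTiger.cache["maximum bytes configured"])'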
NOTE: make sure that your:

resources:
  limits:
    memory:

allows an extra 1G on top of cacheSizeGB for the system.
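For example (my own illustration, matching cacheSizeGB: 2 above):

resources:
  limits:
    cpu: "1"
    memory: 3Gi      # 2G WiredTiger cache + ~1G headroom for the rest of mongod and the OS
  requests:
    cpu: 250m
    memory: 3Gi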