MongoDB restoring seemingly stuck/ not progressing

3/2/2022

After a mongodump I am trying to restore using mongorestore.

It works locally in seconds. However, when I kubectl exec -it into the pod of the primary MongoDB node and run the same command, it gets stuck: the progress line repeats endlessly with the same percentage and only an updated timestamp (the first and last lines below are identical except for the timestamp, so zero progress). This goes on for about 5 hours, and then the process is killed with an OOM error.

I am using the mongo:3.6.9 image.

2022-03-02T22:56:36.043+0000	[#############...........]  mydb.users  3.65MB/6.37MB  (57.4%)
2022-03-02T22:56:39.043+0000	[#############...........]  mydb.users  3.65MB/6.37MB  (57.4%)
2022-03-02T22:56:42.043+0000	[#############...........]  mydb.users  3.65MB/6.37MB  (57.4%)
2022-03-02T22:56:45.043+0000	[#############...........]  mydb.users  3.65MB/6.37MB  (57.4%)
2022-03-02T22:56:48.043+0000	[#############...........]  mydb.users  3.65MB/6.37MB  (57.4%)
2022-03-02T22:56:51.043+0000	[#############...........]  mydb.users  3.65MB/6.37MB  (57.4%)
2022-03-02T22:56:54.043+0000	[#############...........]  mydb.users  3.65MB/6.37MB  (57.4%)

The same behavior occurs when I run mongorestore from a restore container, specifying all mongo pods like so: mongorestore --db=mydb --collection=users data/mydb/users.bson --host mongo-0.mongo,mongo-1.mongo,mongo-2.mongo --port 27017

Is there anything else I could try?

-- veste
kubernetes
mongodb
mongoimport
mongorestore

1 Answer

3/3/2022

I found my answer here: https://stackoverflow.com/a/41352269/18358598

Adding --writeConcern '{w:0}' to the mongorestore command works.
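For reference, the full command would look something like the sketch below, combining the restore command from the question with the write concern flag from the linked answer (the hostnames and paths are the ones from the question, not guaranteed defaults):

```shell
# w:0 tells mongorestore not to wait for the replica set to acknowledge
# each write, which avoids the restore stalling while secondaries catch up.
mongorestore --db=mydb --collection=users data/mydb/users.bson \
  --host mongo-0.mongo,mongo-1.mongo,mongo-2.mongo --port 27017 \
  --writeConcern '{w:0}'
```

Note that with w:0 the restore returns without confirmation that the data was replicated, so it is worth verifying the collection afterwards (e.g. with a count in the mongo shell).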

-- veste
Source: StackOverflow