MongoDB replica set - Secondary repeatedly goes down

4/2/2019

I'm trying to set up a MongoDB replica set on my Kubernetes cluster but the Secondary member keeps restarting after a few seconds.

Here are a couple of things that might be useful to know:

  • The Mongo server (and client) version is 4.0.6
  • I'm using the official helm mongodb-replicaset chart to set up the replica set and the only custom setting I'm using is enableMajorityReadConcern: false
  • The oplog size is configured to ~1228MB (only 4.7MB used)
  • It happens both with a Primary-Secondary architecture and with a PSA architecture, where the Arbiter dies repeatedly just like the Secondary member, whilst the Primary is always up and running
  • This happens both on my minikube and on a staging cluster on GCP with plenty of free resources (I'm deploying this with no resource limits, see right below for the cluster status)

Staging cluster status (4 nodes):

NAME                               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
gke-staging-pool-1-********-****   551m         28%    3423Mi          60%       
gke-staging-pool-1-********-****   613m         31%    3752Mi          66%       
gke-staging-pool-1-********-****   960m         49%    2781Mi          49%       
gke-staging-pool-1-********-****   602m         31%    3590Mi          63%       

At the moment, since the Primary seems to be able to stay up and running, I managed to keep the cluster live by removing the votes from all members but the Primary. This way Mongo doesn't relinquish the Primary for not being able to see a majority of the set, and my app can still do writes.
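For reference, this is roughly how I applied the workaround from the mongo shell on the Primary (a sketch; the member index matches the rs.conf() output further down, adjust it for your own set):

```javascript
// Run in the mongo shell connected to the Primary.
// Strip the vote (and priority) from every member except the Primary (_id 0),
// so the Primary never steps down for failing to see a voting majority.
cfg = rs.conf()
cfg.members[1].votes = 0
cfg.members[1].priority = 0  // a non-voting member must also have priority 0
rs.reconfig(cfg)
```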

If I set the logLevel to 5 on the Secondary, the only error I get is this:

2019-04-02T15:11:42.233+0000 D EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-04-02T15:11:42.233+0000 D EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-04-02T15:12:11.980+0000
2019-04-02T15:11:42.233+0000 D NETWORK  [RS] Timer received error: CallbackCanceled: Callback was canceled
2019-04-02T15:11:42.235+0000 D NETWORK  [RS] Decompressing message with snappy
2019-04-02T15:11:42.235+0000 D ASIO     [RS] Request 114334 finished with response: { cursor: { nextBatch: [], id: 46974224885, ns: "local.oplog.rs" }, ok: 1.0, operationTime: Timestamp(1554217899, 1), $replData: { term: 11536, lastOpCommitted: { ts: Timestamp(1554217899, 1), t: 11536 }, lastOpVisible: { ts: Timestamp(1554217899, 1), t: 11536 }, configVersion: 666752, replicaSetId: ObjectId('5c8a607380091703c787b3ff'), primaryIndex: 0, syncSourceIndex: -1 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1554217899, 1), t: 11536 }, lastOpApplied: { ts: Timestamp(1554217899, 1), t: 11536 }, rbid: 1, primaryIndex: 0, syncSourceIndex: -1 }, $clusterTime: { clusterTime: Timestamp(1554217899, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
2019-04-02T15:11:42.235+0000 D EXECUTOR [RS] Received remote response: RemoteResponse --  cmd:{ cursor: { nextBatch: [], id: 46974224885, ns: "local.oplog.rs" }, ok: 1.0, operationTime: Timestamp(1554217899, 1), $replData: { term: 11536, lastOpCommitted: { ts: Timestamp(1554217899, 1), t: 11536 }, lastOpVisible: { ts: Timestamp(1554217899, 1), t: 11536 }, configVersion: 666752, replicaSetId: ObjectId('5c8a607380091703c787b3ff'), primaryIndex: 0, syncSourceIndex: -1 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1554217899, 1), t: 11536 }, lastOpApplied: { ts: Timestamp(1554217899, 1), t: 11536 }, rbid: 1, primaryIndex: 0, syncSourceIndex: -1 }, $clusterTime: { clusterTime: Timestamp(1554217899, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
2019-04-02T15:11:42.235+0000 D EXECUTOR [replication-5] Executing a task on behalf of pool replication
2019-04-02T15:11:42.235+0000 D REPL     [replication-5] oplog fetcher read 0 operations from remote oplog
2019-04-02T15:11:42.235+0000 D EXECUTOR [replication-5] Scheduling remote command request: RemoteCommand 114336 -- target:foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017 db:local expDate:2019-04-02T15:11:47.285+0000 cmd:{ getMore: 46974224885, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 50, term: 11536, lastKnownCommittedOpTime: { ts: Timestamp(1554217899, 1), t: 11536 } }
2019-04-02T15:11:42.235+0000 D ASIO     [replication-5] startCommand: RemoteCommand 114336 -- target:foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017 db:local expDate:2019-04-02T15:11:47.285+0000 cmd:{ getMore: 46974224885, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 50, term: 11536, lastKnownCommittedOpTime: { ts: Timestamp(1554217899, 1), t: 11536 } }
2019-04-02T15:11:42.235+0000 D EXECUTOR [replication-5] Not reaping because the earliest retirement date is 2019-04-02T15:12:11.980+0000
2019-04-02T15:11:42.235+0000 D NETWORK  [RS] Timer received error: CallbackCanceled: Callback was canceled
2019-04-02T15:11:42.235+0000 D NETWORK  [RS] Compressing message with snappy
2019-04-02T15:11:42.235+0000 D NETWORK  [RS] Timer received error: CallbackCanceled: Callback was canceled
2019-04-02T15:11:42.235+0000 D NETWORK  [RS] Timer received error: CallbackCanceled: Callback was canceled
2019-04-02T15:11:42.235+0000 D NETWORK  [RS] Timer received error: CallbackCanceled: Callback was canceled

Given the network error, I verified whether all members could connect to each other, and they can (it's explicitly shown in the logs of all three members).

ADDITIONAL INFO:

foodchain_rs:PRIMARY> rs.status()
{
    "set" : "foodchain_rs",
    "date" : ISODate("2019-04-02T15:35:02.640Z"),
    "myState" : 1,
    "term" : NumberLong(11536),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1554219299, 1),
            "t" : NumberLong(11536)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1554219299, 1),
            "t" : NumberLong(11536)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1554219299, 1),
            "t" : NumberLong(11536)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1554219299, 1),
            "t" : NumberLong(11536)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 4376,
            "optime" : {
                "ts" : Timestamp(1554219299, 1),
                "t" : NumberLong(11536)
            },
            "optimeDate" : ISODate("2019-04-02T15:34:59Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1554214927, 1),
            "electionDate" : ISODate("2019-04-02T14:22:07Z"),
            "configVersion" : 666752,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "foodchain-backend-mongodb-replicaset-1.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 10,
            "optime" : {
                "ts" : Timestamp(1554219299, 1),
                "t" : NumberLong(11536)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1554219299, 1),
                "t" : NumberLong(11536)
            },
            "optimeDate" : ISODate("2019-04-02T15:34:59Z"),
            "optimeDurableDate" : ISODate("2019-04-02T15:34:59Z"),
            "lastHeartbeat" : ISODate("2019-04-02T15:35:01.747Z"),
            "lastHeartbeatRecv" : ISODate("2019-04-02T15:35:01.456Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017",
            "syncSourceHost" : "foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 666752
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1554219299, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1554219299, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
foodchain_rs:PRIMARY> rs.printReplicationInfo()
configured oplog size:   1228.8701171875MB
log length start to end: 1646798secs (457.44hrs)
oplog first event time:  Thu Mar 14 2019 14:08:51 GMT+0000 (UTC)
oplog last event time:   Tue Apr 02 2019 15:35:29 GMT+0000 (UTC)
now:                     Tue Apr 02 2019 15:35:34 GMT+0000 (UTC)
foodchain_rs:PRIMARY> db.getReplicationInfo()
{
    "logSizeMB" : 1228.8701171875,
    "usedMB" : 4.7,
    "timeDiff" : 1646838,
    "timeDiffHours" : 457.46,
    "tFirst" : "Thu Mar 14 2019 14:08:51 GMT+0000 (UTC)",
    "tLast" : "Tue Apr 02 2019 15:36:09 GMT+0000 (UTC)",
    "now" : "Tue Apr 02 2019 15:36:11 GMT+0000 (UTC)"
}
foodchain_rs:PRIMARY> rs.conf()
{
    "_id" : "foodchain_rs",
    "version" : 666752,
    "protocolVersion" : NumberLong(1),
    "writeConcernMajorityJournalDefault" : true,
    "members" : [
        {
            "_id" : 0,
            "host" : "foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "foodchain-backend-mongodb-replicaset-1.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 0,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 0
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 100,
        "catchUpTimeoutMillis" : -1,
        "catchUpTakeoverDelayMillis" : 30000,
        "getLastErrorModes" : {

        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("5c8a607380091703c787b3ff")
    }
}
-- Francesco Casula
kubernetes
kubernetes-helm
mongodb
replicaset

1 Answer

4/3/2019

The issue was an electionTimeoutMillis setting that was too short.

Lower values result in faster failover, but increased sensitivity to primary node or network slowness or spottiness.

In my case it was set to 100ms, and that wasn't enough time for my Secondary to find the Primary member, so it was unable to sync and thus unavailable.
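The fix is a one-line reconfiguration; a minimal sketch, run from the mongo shell on the Primary (10000ms is MongoDB's default for this setting):

```javascript
// Run in the mongo shell connected to the Primary.
// Restore electionTimeoutMillis from 100ms to the 10-second default.
cfg = rs.conf()
cfg.settings.electionTimeoutMillis = 10000
rs.reconfig(cfg)
```

In my chart's case the value ultimately comes from the Helm release configuration, so it should also be fixed there or it will be reapplied on the next upgrade.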

I think it's also worth noting that the process was not being killed. The mongod PID was always 1, and the uptime shown in top did not coincide with the uptime shown by rs.status() in the mongo shell.

What I was doing was monitoring the Secondary uptime via the mongo shell like this:

watch -n 1.0 "kubectl -n foodchain exec -it foodchain-backend-mongodb-replicaset-0 -- mongo --eval='rs.status().members.map(m => m.uptime)'"

With that command I could see that the Secondary uptime was never longer than 10s, so I assumed it was restarting itself or being OOM-killed or something. Instead, I think it was trying to fire an election but didn't have the votes to do so, and went silent on restarting. What I think really confused me was the lack of information in that regard, despite having set the logLevel to 5.

-- Francesco Casula
Source: StackOverflow