What does MongoDB "ping" command actually do?

11/23/2021

I am running MongoDB as a StatefulSet in Kubernetes.

I am trying to use startup/liveness probes, I noticed some helm charts use the MongoDB "ping" command.

As the documentation says:

The ping command is a no-op used to test whether a server is responding to commands. This command will return immediately even if the server is write-locked:

What does it mean? When a server is starting or in the midst of initial sync, what will the command return? Many thanks!

-- C C H
kubernetes
mongodb

1 Answer

11/23/2021

I'm not sure ping is a good idea here: you don't care about the general state of the server, you care that it can receive connections.

Liveness probes have a timeout, so when you later add a new replica, the new pod in the StatefulSet may fail its probes while it is still waiting for the initial sync to finish.

You should use rs.status() and check the "myState" field.

myState is an integer flag between 0 and 10. See the MongoDB replica set states documentation for all the possible values.
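For quick reference, here is how those integers map to member states (a lookup table based on the MongoDB replica set states documentation; the dict name is mine):

```python
# Replica set member states, keyed by the integer that
# rs.status().myState reports.
MEMBER_STATES = {
    0: "STARTUP",     # parsing config, not yet an active member
    1: "PRIMARY",     # the only member that accepts writes
    2: "SECONDARY",   # replicates data, can serve reads
    3: "RECOVERING",  # up, but not ready to serve reads
    5: "STARTUP2",    # initial sync in progress
    6: "UNKNOWN",     # state not known (as seen from another member)
    7: "ARBITER",     # votes in elections, holds no data
    8: "DOWN",        # unreachable (as seen from another member)
    9: "ROLLBACK",    # rolling back divergent writes
    10: "REMOVED",    # removed from the replica set config
}

print(MEMBER_STATES[2])  # SECONDARY
```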

And if the rs.status() command fails for whatever reason, that means the ping would also fail.

However, a successful ping doesn't mean the server is ready to accept connections and serve data, which is what you really care about.
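For comparison, this is roughly what the ping-based check in those helm charts looks like as a probe (a sketch; flags and thresholds are illustrative, and it passes as soon as mongod answers commands at all, even during startup or initial sync):

```yaml
# Hypothetical ping-based startup probe for the mongod container.
startupProbe:
  exec:
    command:
      - mongosh
      - --quiet
      - --eval
      - "db.adminCommand({ ping: 1 })"
  periodSeconds: 10
  failureThreshold: 30
```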

Probes

startup probe: myState equals 1 or 2

This means the startup probe will wait patiently until the server is ready, regardless of whether it comes up as a primary or a secondary.

readiness probe: myState equals 1 or 2

This means that whenever a replica needs to roll back, is recovering, or mongod decides for any other reason that it's not ready to accept connections or serve data, Kubernetes will know the pod is not ready and will route requests to the other pods in the StatefulSet.

liveness probe: myState is NOT equal to 6, 8 or 10

This means that unless the server state is UNKNOWN, DOWN or REMOVED, Kubernetes will assume the server is alive.
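The three rules above boil down to simple predicates over myState (a sketch; the function names are mine):

```python
def startup_ok(my_state: int) -> bool:
    # Pass once the node is PRIMARY (1) or SECONDARY (2).
    return my_state in (1, 2)

def readiness_ok(my_state: int) -> bool:
    # Same condition: only PRIMARY/SECONDARY should receive traffic.
    return my_state in (1, 2)

def liveness_ok(my_state: int) -> bool:
    # Fail only on UNKNOWN (6), DOWN (8) or REMOVED (10);
    # transient states like ROLLBACK (9) keep the pod alive.
    return my_state not in (6, 8, 10)

# A node doing initial sync (STARTUP2 = 5) is not ready, but alive:
print(readiness_ok(5), liveness_ok(5))  # False True
```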

So, let's test a scenario!

  • sts started, first pod is on STARTUP, myState = 0
  • startup probe waits
  • first MongoDB node is ready, myState = 1
  • startup probe finally passed, now readiness and liveness probes start acting
  • new replica triggered, second pod is on STARTUP, myState = 0
  • new replica successfully joins the set, myState = 5
  • new replica is ready, myState = 2
  • startup probe finally passed, now readiness and liveness probes start acting
  • time for some action
  • a massive operation that altered hundreds of documents had to be rolled back
  • second pod is now on ROLLBACK, myState = 9, readiness probe failed, second pod is now NOT READY
  • all connections are now sent to the PRIMARY
  • second pod has finished the rollback
  • second pod is now back as a SECONDARY, myState = 2, the readiness probe succeeds and the pod is back in the READY state
  • the MongoDB DBA messed up and issued a command that removed the secondary from the replica set, myState = 10
  • liveness probe fails, Kubernetes restarts the container
  • the restarted pod goes through its startup probe again, and the cycle repeats ...

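Wired into the StatefulSet, the three probes might look roughly like this (a sketch; mongosh flags, periods and thresholds are illustrative, not taken from any particular chart):

```yaml
# Illustrative probe definitions for the mongod container.
startupProbe:
  exec:
    command: ["mongosh", "--quiet", "--eval",
              "if (![1, 2].includes(rs.status().myState)) quit(1)"]
  periodSeconds: 10
  failureThreshold: 60   # allow plenty of time for initial sync
readinessProbe:
  exec:
    command: ["mongosh", "--quiet", "--eval",
              "if (![1, 2].includes(rs.status().myState)) quit(1)"]
  periodSeconds: 10
livenessProbe:
  exec:
    command: ["mongosh", "--quiet", "--eval",
              "if ([6, 8, 10].includes(rs.status().myState)) quit(1)"]
  periodSeconds: 10
```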
all good :)

-- Magus
Source: StackOverflow