I am running 3 MongoDB pods, with a separate service and persistent volume claim for each pod, and I want to set up MongoDB replication among the 3 pods. I logged into the first pod, ran the mongo command, and then configured the hosts as podname.servicename.namespace.svc.cluster.local:27017 for each pod.
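For context, per-pod DNS names of that form only resolve because the pods come from a StatefulSet governed by a headless Service (clusterIP: None); DNS then returns each pod's IP directly rather than a cluster-IP. A minimal sketch of such a Service, assuming the StatefulSet and Service are both named mongo in the default namespace (my real manifests may differ):

# Hypothetical headless Service; clusterIP: None is what makes each
# pod resolvable as mongo-N.mongo.default.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: mongo          # service name used in the per-pod DNS entries
  namespace: default
spec:
  clusterIP: None      # headless: DNS resolves to pod IPs directly
  selector:
    app: mongo         # must match the StatefulSet's pod labels
  ports:
    - port: 27017
      targetPort: 27017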
rs.initiate(
  {
    "_id": "rs0",
    "members": [
      {
        "_id": 0,
        "host": "mongo-0.mongo.default.svc.cluster.local:27017",
        "priority": 10
      },
      {
        "_id": 1,
        "host": "mongo-1.mongo.default.svc.cluster.local:27017",
        "priority": 9
      },
      {
        "_id": 2,
        "host": "mongo-2.mongo.default.svc.cluster.local:27017",
        "arbiterOnly": true
      }
    ]
  }
)
I am getting an error like this:
replSetInitiate quorum check failed because not all proposed set members responded affirmatively: mongo-1.mongo.default.svc.cluster.local:27017 failed with Error connecting to mongo-1.mongo.default.svc.cluster.local:27017 (10.36.0.1:27017) :: caused by :: Connection refused, mongo-2.mongo.default.svc.cluster.local:27017 failed with Error connecting to mongo-2.mongo.default.svc.cluster.local:27017 (10.44.0.3:27017) :: caused by :: Connection refused
Here I have a doubt about whether it takes the cluster-IP or the node-IP as the host while doing MongoDB replication in a Kubernetes cluster.
Could anybody suggest how to configure the hostname when setting up MongoDB replication in Kubernetes?
You must explicitly bind mongod to the non-loopback interface since mongo 3.6, according to the fine manual. You can test that theory yourself by exec-ing into mongo-1.mongo.default and attempting to manually connect to mongo-2.mongo.default, which I am about 90% certain will fail for you in the same way it fails for mongod.
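As a sketch of the fix (assuming the pods come from a StatefulSet running the stock mongo image; the fragment below is illustrative, not your actual spec), start mongod with --bind_ip_all so it listens on the pod's network interface instead of only 127.0.0.1:

# Fragment of a hypothetical StatefulSet pod template; only the
# mongod invocation matters here.
containers:
  - name: mongo
    image: mongo:3.6
    command:
      - mongod
      - --replSet
      - rs0
      - --bind_ip_all      # listen on all interfaces, not just loopback
    ports:
      - containerPort: 27017

The equivalent mongod.conf setting is net.bindIp: 0.0.0.0 (or a list that includes the pod's interface). Once all three pods are restarted with that binding, re-running rs.initiate(...) from mongo-0 should pass the quorum check, and rs.status() should then show the other members.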