I am deploying a MongoDB StatefulSet on a Kubernetes cluster (GKE).
Each time I deploy the StatefulSet from scratch, I have to run the following commands manually inside the mongo pod.
My steps:
deploy the yaml file
exec into the mongo pod with "kubectl exec -it mongo-0 -- /bin/bash"
run "mongo"
run
rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "mongo-0.mongo:27017" }, { _id: 1, host: "mongo-1.mongo:27017" } ] })
to initiate the replica set
Can I put this command in the mongo-statefulset.yml file, or is there another way to handle it automatically?
mongo-statefulset.yml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      role: mongo
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:4.0
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--bind_ip"
        - 0.0.0.0
        ports:
        - containerPort: 27017
There is also another critical problem:
Sometimes, for reasons I don't know, all of the pods in my cluster are suddenly restarted. Everything comes back up and keeps running normally after the restart, except this mongo StatefulSet. The replica set status of mongo becomes "OTHER" instead of "PRIMARY"/"SECONDARY", so the mongo service cannot be used until I type the following commands to recover it:
rsconf = rs.conf()
rsconf.members = [
{ _id: 0, host: "mongo-0.mongo:27017", priority: 1 },
{ _id: 1, host: "mongo-1.mongo:27017", priority: 0.5 }
]
rs.reconfig(rsconf, {force: true})
How can I recover it automatically?
Two options here:
Use Kubernetes Lifecycle Hooks. The PostStart hook would lend itself to your use case. However, make sure to build in a check that validates that mongod has actually started before running the configuration call, because the hook runs in parallel with the container's entrypoint. The hook is also executed when K8s decides to restart a Pod (see the first sketch below).
Use Helm Chart Hooks. These add the option to differentiate between post-install and post-upgrade. However, they only fire after Helm installs or upgrades the chart, not when K8s restarts the Pod (see the second sketch below).
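For illustration, here is a minimal sketch of the PostStart variant, spliced into the mongo container of the manifest above. The wait loop and the hostname check are my additions; the member list is copied from the rs.initiate call in the question:

        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - |
                # PostStart runs in parallel with the container entrypoint,
                # so wait until mongod accepts connections first.
                until mongo --quiet --eval "db.adminCommand('ping')" >/dev/null 2>&1; do
                  sleep 2
                done
                # Run the initiation from the first pod only. rs.initiate()
                # on an already-initialized set merely returns an error
                # document, so re-running the hook after a restart is harmless.
                if [ "$(hostname)" = "mongo-0" ]; then
                  mongo --eval 'rs.initiate({ _id: "rs0", members: [
                    { _id: 0, host: "mongo-0.mongo:27017" },
                    { _id: 1, host: "mongo-1.mongo:27017" } ] })'
                fi

Note that rs.initiate with a second member only succeeds once mongo-1 is reachable, so with replicas: 1 you would either initiate with a single member or scale the StatefulSet up first.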
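The Helm variant could be sketched as a one-off Job carrying the hook annotations. The Job name here is made up, and mongo-0.mongo assumes the Job runs in the same namespace as your headless "mongo" service:

apiVersion: batch/v1
kind: Job
metadata:
  name: mongo-rs-init
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: rs-init
        image: mongo:4.0
        command:
        - /bin/sh
        - -c
        - |
          # Initiate the replica set through the first pod's stable DNS name.
          mongo --host mongo-0.mongo:27017 --eval 'rs.initiate({ _id: "rs0", members: [
            { _id: 0, host: "mongo-0.mongo:27017" },
            { _id: 1, host: "mongo-1.mongo:27017" } ] })'

You could set the hook annotation to "post-install,post-upgrade" to cover chart upgrades too, but as said above, neither variant fires when K8s restarts the Pod on its own.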