I am now implementing a new system, divided into two clusters. The first is for compute jobs; it changes very frequently through CI/CD, so I want to isolate it from my juniors' accidents and also save cost, since a compute node does not need 100GB of disk the way a database node does.
Now I am setting up my mongo-replicaset
using helm
. My configuration works fine. Here is my terminal log from the installation.
I installed with 100GB per node, three nodes in total:
$ gcloud container clusters create elmo --disk-size=100GB --enable-cloud-logging --enable-cloud-monitoring
I have changed the username and password in values.yaml:
mongodbUsername: myuser
mongodbPassword: mypassword
However, when I exec into the pod, it does not require me to do any authentication. I can execute show dbs:
$ kubectl exec -it ipman-mongodb-replicaset-0 mongo
MongoDB shell version v4.0.6
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("966e85fd-8857-46ac-a2a4-a8b560e37104") }
MongoDB server version: 4.0.6
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
2019-03-20T12:15:51.266+0000 I STORAGE [main] In File::open(), ::open for '//.mongorc.js' failed with Unknown error
Server has startup warnings:
2019-03-20T11:36:03.768+0000 I STORAGE [initandlisten]
2019-03-20T11:36:03.768+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-03-20T11:36:03.768+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-03-20T11:36:05.082+0000 I CONTROL [initandlisten]
2019-03-20T11:36:05.082+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-03-20T11:36:05.082+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-03-20T11:36:05.083+0000 I CONTROL [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
rs0:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
I can see two services running for mongodb-replicaset:
$ kubectl describe svc ipman-mongodb-replicaset
Name: ipman-mongodb-replicaset
Namespace: default
Labels: app=mongodb-replicaset
chart=mongodb-replicaset-3.9.2
heritage=Tiller
release=ipman
Annotations: service.alpha.kubernetes.io/tolerate-unready-endpoints: true
Selector: app=mongodb-replicaset,release=ipman
Type: ClusterIP
IP: None
Port: mongodb 27017/TCP
TargetPort: 27017/TCP
Endpoints: 10.60.1.5:27017,10.60.2.7:27017,10.60.2.8:27017
Session Affinity: None
Events: <none>
$ kubectl describe svc ipman-mongodb-replicaset-client
Name: ipman-mongodb-replicaset-client
Namespace: default
Labels: app=mongodb-replicaset
chart=mongodb-replicaset-3.9.2
heritage=Tiller
release=ipman
Annotations: <none>
Selector: app=mongodb-replicaset,release=ipman
Type: ClusterIP
IP: None
Port: mongodb 27017/TCP
TargetPort: 27017/TCP
Endpoints: 10.60.1.5:27017,10.60.2.7:27017,10.60.2.8:27017
Session Affinity: None
Events: <none>
I have seen here and here. I have 3 IP addresses. Which one should I use?
I think LoadBalancer
might not fit my need, because it is normally used in front of backend
services to balance load between nodes. In my case, the primary
does the writing and the replicas
do the reading.
$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-elmo-default-pool-c5dc6e86-1j8v asia-southeast1-a n1-standard-1 10.148.0.59 35.197.148.201 RUNNING
gke-elmo-default-pool-c5dc6e86-5hs4 asia-southeast1-a n1-standard-1 10.148.0.57 35.198.217.71 RUNNING
gke-elmo-default-pool-c5dc6e86-wh0l asia-southeast1-a n1-standard-1 10.148.0.58 35.197.128.107 RUNNING
Question:
Why is my username:password
not taken into account during authentication?
How can I expose my mongo
shell and let clients coming from the internet use my database server with
mongo -u <user> -p <pass> --host kluster.me.com --port 27017
I have checked the helm chart
documentation. I am worried that I am using k8s
in the wrong way, so I decided to ask here.
I cannot answer the password issue, but using a separate cluster for your DB might not be the best option: a separate cluster forces you to expose your sensitive database to the world, which is not ideal.
I recommend you deploy your mongo on your existing cluster. This way your computing workloads can connect to your mongo simply by using the service name as the hostname.
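For example, given the release name ipman and the default namespace from your logs, the headless service gets a cluster-internal DNS name your compute pods can use directly (this is a sketch of the standard Kubernetes service DNS convention, not something specific to this chart):

```shell
# Build the cluster-internal connection URI for the headless service.
# Assumes release "ipman", namespace "default", and replica set "rs0",
# all taken from the logs above.
service="ipman-mongodb-replicaset"
namespace="default"
uri="mongodb://${service}.${namespace}.svc.cluster.local:27017/?replicaSet=rs0"
echo "$uri"

# A pod inside the cluster would then connect with:
#   mongo "$uri"
# Individual members are also addressable as
#   ipman-mongodb-replicaset-{0,1,2}.ipman-mongodb-replicaset.default.svc.cluster.local
```

No external exposure is needed for this, which is the main point of keeping the database in the same cluster.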
If you need a bigger disk for your mongo, simply use a persistent disk and specify the size when you create your mongo installation using helm.
For example:
helm install stable/mongodb-replicaset --name whatever --set persistentVolume.size=100Gi
In your values.yaml
file, you have a section called persistence
when it should be called persistentVolume
.
I recommend that your values.yaml
contain only the values you want to change, not the chart's entire default file.
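A minimal sketch of such a values.yaml, assuming the stable/mongodb-replicaset chart's key names (double-check them against your chart version's default values file):

```yaml
# Minimal override file -- everything not listed here falls back to the
# chart's defaults. Note the section is persistentVolume, not persistence.
persistentVolume:
  size: 100Gi
```

Then install with helm install stable/mongodb-replicaset --name whatever -f values.yaml, which keeps the override file small and easy to review.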