I have a Kubernetes installation with a master and one node.
It is configured and everything is working very well.
$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
mantis-gfs    1/1     Running   1          22h
mongodb-gfs   1/1     Running   0          14h
I exposed the pod mongodb-gfs:
$ kubectl expose pod mongodb-gfs --port=27017 --external-ip=10.9.8.100 --name=mongodb --labels="env=development"
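For reference, kubectl expose pod builds the new Service's selector from the pod's existing labels (the --labels flag only sets labels on the Service object itself). Assuming the pod is labeled env=development, the generated Service would look roughly like this sketch:

apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    env: development           # set by --labels (labels the Service object itself)
spec:
  selector:
    env: development           # copied from the pod's own labels by kubectl expose
  ports:
  - port: 27017
    targetPort: 27017          # defaults to the same value as port
  externalIPs:
  - 10.9.8.100                 # set by --external-ip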
The external IP 10.9.8.100 is the IP of the Kubernetes master node.
The service was created successfully.
$ kubectl get services
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
glusterfs-cluster   ClusterIP   10.111.96.254   <none>        1/TCP       23d
kubernetes          ClusterIP   10.96.0.1       <none>        443/TCP     29d
mongodb             ClusterIP   10.100.149.90   10.9.8.100    27017/TCP   1m
Now I am able to access MongoDB using:
mongo 10.9.8.100:27017
And here is the problem: it works some of the time, but not always. I connect once and get the shell; I connect a second time and get:
$ mongo 10.9.8.100:27017
MongoDB shell version v3.4.17
connecting to: mongodb://10.9.8.100:27017/test
2018-11-01T09:27:23.524+0100 W NETWORK [thread1] Failed to connect to 10.9.8.100:27017, in(checking socket for error after poll), reason: Connection refused
2018-11-01T09:27:23.524+0100 E QUERY [thread1] Error: couldn't connect to server 10.9.8.100:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:240:13
@(connect):1:6
exception: connect failed
Then I try again and it works, try again and it works, try again and it doesn't...
Any clues as to what may be causing the problem?
I found the problem and the solution. The problem was in the pod definitions: both pods, mongodb-gfs and mantis-gfs, had the same label settings, and I exposed the services with the same label env=development. Since kubectl expose pod builds the Service's selector from the pod's labels, the traffic that I expected to always go to one pod was in fact "load balanced" between one pod or the other (they have the same labels), even though they are of different types.
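A quick way to confirm this kind of overlap is to list the endpoints behind the service; when the selector matches more than one pod, more than one address shows up, and kube-proxy distributes connections across all of them:

$ kubectl get endpoints mongodb

If the mantis-gfs pod's IP appears in that list, the intermittent "Connection refused" errors are the connections that landed on the pod that is not listening on 27017.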
Changing the label in the mongodb-gfs pod definition, so that the two pods no longer share the same labels, solved the connection issues.
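For illustration only (the actual label values are not shown in the post), the fix amounts to giving mongodb-gfs a label value that mantis-gfs does not share, for example:

apiVersion: v1
kind: Pod
metadata:
  name: mongodb-gfs
  labels:
    env: development-mongodb   # hypothetical value; the point is it is no longer shared with mantis-gfs
spec:
  containers:
  - name: mongodb
    image: mongo:3.4           # hypothetical image; the original pod spec is not shown in the post
    ports:
    - containerPort: 27017

After recreating the pod with the new label, the service also has to be recreated with kubectl expose so that its selector is rebuilt from the new label and matches only this pod.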