I have a pod running on Google Kubernetes Engine (GKE) and a MongoDB cluster running on Atlas. The issue is quite simple:
If I allow access from ANYWHERE in the Atlas IP whitelist, I can connect. If I instead add only the IP of the pod (so no longer ANYWHERE), it doesn't work.
I also tried connecting locally and from a Docker container running locally, and both work.
I got the IP (YY.YYY.YYY.YY) of my pod using:
MacBook-Pro-de-Emixam23:plop-service emixam23$ kubectl get services
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                           AGE
polop-service   LoadBalancer   XX.XX.X.XXX   YY.YYY.YYY.YY   ZZZZZ:32633/TCP,ZZZZZ:32712/TCP   172m
kubernetes      ClusterIP      XX.X.X.X      <none>          443/TCP                           3h24m
But given the behavior I'm seeing, I suspect this EXTERNAL-IP isn't the IP my requests are actually sent from.
Can anyone explain what the issue could be?
The IP you whitelist in MongoDB Atlas has to be the Internet-accessible (public) IP that your outbound traffic actually comes from.
Normally that is the NAT gateway's IP (or the proxy server's IP, if you go through a proxy), not the LoadBalancer's EXTERNAL-IP, which only handles inbound traffic to your service.
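For example, on GKE without Cloud NAT, pods usually egress through the external IP of the node they run on. A minimal sketch for listing those node IPs (the gke- name filter is an assumption about the default node naming):

# The EXTERNAL-IP column shows the addresses pods egress from
# when no Cloud NAT or proxy sits in front of them.
kubectl get nodes -o wide

# Or query GCP directly for the nodes' public IPs.
gcloud compute instances list --filter="name~gke-" \
  --format="table(name, networkInterfaces[0].accessConfigs[0].natIP)"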
One quick way to check the real egress IP is to run the following command from inside a pod:
curl ifconfig.me
If your pod's image doesn't have curl, you can kubectl exec -ti <pod_name> -- sh
into it and install it first.
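A minimal end-to-end sketch, assuming an Alpine- or Debian-based image and <pod_name> as a placeholder:

# Open a shell in the pod.
kubectl exec -ti <pod_name> -- sh

# Inside the pod: install curl if it's missing (pick the line matching your base image),
# then print the public IP your outbound traffic appears to come from.
apk add curl                                  # Alpine
apt-get update && apt-get install -y curl     # Debian/Ubuntu
curl ifconfig.me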
Remember: there is usually more than one egress IP. A NAT gateway often has three or more public-facing IPs, so you need to find them all and add each one to the MongoDB Atlas IP whitelist.
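If your cluster egresses through Cloud NAT, you can enumerate those IPs from GCP. A rough sketch, where nat-router, my-nat, and us-central1 are placeholder names/regions:

# Find the Cloud Router that carries the NAT configuration.
gcloud compute routers list

# List the NAT gateways on that router, then inspect the external IPs they use.
gcloud compute routers nats list --router=nat-router --region=us-central1
gcloud compute routers nats describe my-nat --router=nat-router --region=us-central1

# Reserved static external addresses in the region (these often back the NAT).
gcloud compute addresses list --regions=us-central1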