Unable to access external mongo server from a pod but able to connect from EC2 instance

10/28/2018

I am trying to connect to a MongoDB instance running on an external server from a pod in a k8s cluster. I have VPC peering set up between the two VPCs, and I can connect to the MongoDB server from the nodes without any problem, but when I try from a running pod, it fails. From a traceroute, it looks like the private IP is not being routed outside of the pod network.

Is there anything else that needs to be configured on the pod networking side?
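
The test from inside the cluster looked roughly like this (the busybox image and 10.0.1.25:27017 stand in for whatever debug image and MongoDB private IP/port apply):

$ kubectl run net-test --rm -it --image=busybox --restart=Never -- sh
/ # traceroute 10.0.1.25
/ # nc 10.0.1.25 27017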

-- Shantanu Deshpande
amazon-web-services
kubernetes

3 Answers

10/28/2018

This is working fine. I was testing the connectivity with telnet from a pod, and since telnet was not returning anything after the successful connection, it looked like there was a network issue. After testing with a simple HTTP server and monitoring the connections, I saw that it all worked fine.
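
Roughly what that test looked like (10.0.1.25 and port 8000 stand in for the server's private IP and an arbitrary port):

# On the MongoDB host (any simple HTTP server works; this one ships with Python 3)
$ python3 -m http.server 8000

# From inside the pod
$ curl -v http://10.0.1.25:8000/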

-- Shantanu Deshpande
Source: StackOverflow

10/29/2018

The pod CIDR is overlapping with the VPC CIDR in which your MongoDB server resides, so Kube Router prefers the internal route table first, which is working as designed.

You either need to reconfigure your VPC network with a different CIDR range or change the Kubernetes pod network.
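
A quick way to confirm the overlap is to compare the two ranges (vpc-0abc123 is a placeholder for your VPC ID):

# CIDR of the VPC hosting the MongoDB server
$ aws ec2 describe-vpcs --vpc-ids vpc-0abc123 --query 'Vpcs[].CidrBlock' --output text

# Pod CIDRs allocated to the nodes (populated when the controller manager assigns them)
$ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'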

-- aarvee
Source: StackOverflow

10/28/2018

Taking a wild guess here, I believe your podCidr is conflicting with one of the CIDRs in your VPC. For example:

192.168.0.0/16 (podCidr) ->  192.168.1.0/24 (VPC CIDR)
# The pod thinks it needs to talk to another pod in the cluster
# instead of to the external server

You can see your podCidr with this command (clusterCIDR field):

$ kubectl -n kube-system get cm kube-proxy -o=yaml

Another place where things could be misconfigured is your overlay network, where the pods are not getting a pod IP address.
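
A quick sanity check for the overlay side is to verify that pods are actually getting IPs inside the podCidr:

# The IP column should fall inside the podCidr; empty or node-host IPs
# would point to an overlay/CNI problem
$ kubectl get pods --all-namespaces -o wide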

-- Rico
Source: StackOverflow