How can I access an external MongoDB server on an EC2 instance from an app running inside a Kubernetes cluster created with kops?

10/21/2018

I have a situation where my MongoDB is running on a separate EC2 instance and my app is running inside a Kubernetes cluster created by kops. Now I want to access the DB from the app running inside k8s.

For this, I tried VPC peering between the k8s VPC and the EC2 instance's VPC, setting the k8s VPC as the requester and the instance's VPC as the accepter. After that, I also added an ingress rule to the EC2 instance's security group allowing access from the k8s cluster's security group on port 27017.
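For reference, the peering request and acceptance I did would look roughly like this with boto3 (all IDs and the region are placeholders, not my actual resources):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # Request peering with the k8s VPC as requester and the MongoDB VPC as accepter.
    peering = ec2.create_vpc_peering_connection(
        VpcId="vpc-0k8s0000000000000",       # placeholder: k8s VPC
        PeerVpcId="vpc-0mongo00000000000",   # placeholder: MongoDB VPC
    )["VpcPeeringConnection"]

    # Accepting works from the same client when both VPCs are in one account/region.
    ec2.accept_vpc_peering_connection(
        VpcPeeringConnectionId=peering["VpcPeeringConnectionId"]
    )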

But when I SSH'd into a k8s node and tried to connect with telnet, the connection failed.
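The test I ran was essentially the following, here as a minimal Python equivalent (the MongoDB instance's private IP is a placeholder):

    import socket

    # Placeholder private IP of the MongoDB EC2 instance in 172.16.0.0/16.
    try:
        socket.create_connection(("172.16.0.10", 27017), timeout=5).close()
        print("reachable")
    except OSError as exc:
        print("connection failed:", exc)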

Is there anything incorrect in the procedure? Is there any better way to handle this?

CIDR blocks:

  1. K8S VPC - 172.20.0.0/16

  2. MongoDB VPC - 172.16.0.0/16

-- Shantanu Deshpande
amazon-ec2
kops
kubernetes
mongodb

2 Answers

10/21/2018

First, this does not seem to be a Kubernetes issue.

Make sure you have the proper routes between the Kubernetes nodes and the MongoDB node in both directions: each VPC's route table needs a route for the other VPC's CIDR that targets the peering connection.
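If it helps, a sketch of adding those routes with boto3; the route table and peering connection IDs are placeholders, and the CIDRs are the ones from the question:

    import boto3

    ec2 = boto3.client("ec2")

    # Route in the k8s VPC's route table toward the MongoDB VPC's CIDR.
    ec2.create_route(
        RouteTableId="rtb-0k8s0000000000000",            # placeholder
        DestinationCidrBlock="172.16.0.0/16",            # MongoDB VPC
        VpcPeeringConnectionId="pcx-00000000000000000",  # placeholder
    )

    # Return route in the MongoDB VPC's route table toward the k8s VPC's CIDR.
    ec2.create_route(
        RouteTableId="rtb-0mongo00000000000",            # placeholder
        DestinationCidrBlock="172.20.0.0/16",            # k8s VPC
        VpcPeeringConnectionId="pcx-00000000000000000",
    )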

Make sure the required ports are open in the security groups of both VPCs:

Allow inbound traffic from the Kubernetes VPC to the MongoDB VPC

Allow inbound traffic from the MongoDB VPC to the Kubernetes VPC
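A sketch of those two ingress rules with boto3 (the security group IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Allow MongoDB traffic (27017) into the MongoDB side from the k8s VPC's CIDR.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0mongo00000000000",  # placeholder: MongoDB instance's SG
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 27017,
            "ToPort": 27017,
            "IpRanges": [{"CidrIp": "172.20.0.0/16"}],  # k8s VPC CIDR
        }],
    )

    # And the reverse direction into the k8s nodes from the MongoDB VPC's CIDR.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0k8snodes000000000",  # placeholder: k8s nodes' SG
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 27017,
            "ToPort": 27017,
            "IpRanges": [{"CidrIp": "172.16.0.0/16"}],  # MongoDB VPC CIDR
        }],
    )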

Make sure any namespace-level security (for example, Kubernetes NetworkPolicies) allows the inbound and outbound traffic
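To rule that out, you can list any NetworkPolicies in the app's namespace, for example with the official Kubernetes Python client (the namespace name is a placeholder):

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside the cluster
    # "default" is a placeholder namespace; an empty list means no policy
    # is restricting the app's traffic.
    policies = client.NetworkingV1Api().list_namespaced_network_policy("default")
    for policy in policies.items:
        print(policy.metadata.name)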

-- Ijaz Ahmad Khan
Source: StackOverflow

10/21/2018

What are the CIDR blocks of the two VPCs? They mustn't overlap. In addition, you need to make sure that communication is allowed to travel both ways when modifying the security groups. That is, in addition to modifying your MongoDB VPC to allow inbound traffic from the K8s VPC, you need to make sure the K8s VPC allows inbound traffic from the MongoDB VPC.
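With the CIDRs given in the question you can check the overlap directly; Python's ipaddress module has an overlaps() test:

    import ipaddress

    k8s = ipaddress.ip_network("172.20.0.0/16")    # K8s VPC
    mongo = ipaddress.ip_network("172.16.0.0/16")  # MongoDB VPC
    print(k8s.overlaps(mongo))  # False, so peering between them is fine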

-- Grant David Bachman
Source: StackOverflow