I have a cluster of database nodes hosted on VMs or bare metal, and I'd like to create additional database nodes (hosted in Kubernetes Pods) and have them join the existing VM/bare-metal cluster.
To have them join the cluster, each database must be able to resolve the others via a distinct IP and port. Within the Kubernetes network there is no issue with this, and none between the existing VM-hosted DBs either. The sticking point is that I can't see a way for the VM-hosted DBs to individually route to each Pod-hosted DB. Is there a Kubernetes configuration that will let me make each pod/DB individually routable on specific ports while sharing the same NIC on the host running the cluster? Do I need to front each Pod with its own Service?
Here is the sort of configuration I'm trying to achieve with conceptual IP address spaces.
The approach I personally take for a similar case is to actually make it possible for nodes in the non-Kubernetes environment to talk to the pods themselves. Depending on your network configuration this might be quite easy to achieve.
In my case I simply have two additional components running on the VMs that need to access my k8s internals:
- flannel: ties the VMs into the same flannel network the k8s pods operate in
- kube-proxy: translates Service IPs to pod IPs using iptables (for the cases where I need to access something by its Service IP)
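A rough sketch of what that can look like on one VM, assuming an etcd-backed flannel setup; the etcd endpoint, interface name and kubeconfig path are placeholders for your environment:

```sh
# Sketch only: attach a VM to the cluster's pod overlay and make Service IPs
# resolvable locally. All paths/endpoints below are placeholders.

# flanneld: joins this VM to the same overlay network the pods use
# (assumes flannel is backed by etcd; adjust for your backend).
flanneld --etcd-endpoints=http://etcd.example.internal:2379 \
         --etcd-prefix=/coreos.com/network \
         --iface=eth0 &

# kube-proxy: programs iptables on the VM so ClusterIP Services are reachable.
kube-proxy --kubeconfig=/etc/kubernetes/vm-kubeconfig.yaml \
           --proxy-mode=iptables &
```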
You could avoid setting this up on the VMs (or their hosts) if you can solve it at the gateway level (i.e. run flannel/kube-proxy on your network gateway and augment it with some SNAT rules).
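For illustration only, that could mean routing the pod CIDR towards a box that participates in the flannel overlay and masquerading the VM traffic there; the 10.244.0.0/16 pod CIDR and the addresses below are made-up placeholders:

```sh
# Illustrative sketch: pod CIDR, node IP and VM subnet are placeholders.

# On the VMs (or their default gateway), send pod-CIDR traffic towards a host
# that is part of the flannel overlay:
ip route add 10.244.0.0/16 via 10.0.0.10

# On that gateway host, SNAT/masquerade traffic from the VM subnet so pods
# see a source IP they can route back to:
iptables -t nat -A POSTROUTING -s 172.16.0.0/24 -d 10.244.0.0/16 -j MASQUERADE
```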
Having a NodePort/LB Service per in-k8s DB might work if your DB sticks to the IPs you give it (rather than using them only for discovery bootstrapping and later replacing them with the actual pod IPs of the DBs; IIRC MongoDB usually does something like that).
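If you do go the per-pod Service route, here is a minimal sketch, assuming the DBs run as a StatefulSet named "db" and listen on 5432 (names and ports are placeholders): a NodePort Service can select a single pod via the pod-name label that the StatefulSet controller adds.

```sh
# Sketch only: assumes the DB pods come from a StatefulSet named "db",
# so each pod carries the statefulset.kubernetes.io/pod-name label.
# Port numbers are placeholders.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: db-0-external
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: db-0   # pins the Service to one pod
  ports:
  - port: 5432        # port clients use against the Service
    targetPort: 5432  # port the DB listens on inside the pod
    nodePort: 30432   # fixed port exposed on every node's NIC
EOF
```

You would create one such Service per pod, each with a distinct nodePort, so each DB ends up individually reachable on the shared node NIC at its own port.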