I have a Kubernetes cluster in which there are some MySQL databases.
I want to have a replication slave for each database in a different Kubernetes cluster in a different datacenter.
I'm using Calico as CNI plugin.
To make the replication process work, the slaves must be able to connect to port 3306 on the master servers, and I would prefer to keep these connections as isolated as possible.
I'm wondering about the best approach to manage this.
One of the ways to implement your idea is to use a new tool called Submariner.
Submariner enables direct networking between pods in different Kubernetes clusters, whether on premises or in the cloud.
This new solution overcomes barriers to connectivity between Kubernetes clusters and allows for a host of new multi-cluster implementations, such as database replication within Kubernetes across geographic regions and deploying service mesh across clusters.
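For your replication use case specifically, once Submariner provides this Layer-3 connectivity, a slave pod in the second cluster can reach the master's Service (or pod) IP in the first cluster directly, so the replication setup looks much like it would inside a single cluster. A minimal sketch, assuming GTID-based replication; the context, pod name, IP and credentials (cluster-b, mysql-slave-0, 10.96.10.20, repl) are placeholders you would replace with your own:

```sh
# Minimal sketch, not a verified setup: the slave in cluster B points at the
# master's Service ClusterIP in cluster A, which Submariner makes routable.
# All names, IPs and credentials below are placeholders.
kubectl --context cluster-b exec mysql-slave-0 -- \
  mysql -uroot -p'<root-password>' -e "
    CHANGE MASTER TO
      MASTER_HOST='10.96.10.20',  -- ClusterIP of the master's Service in cluster A
      MASTER_PORT=3306,
      MASTER_USER='repl',
      MASTER_PASSWORD='<repl-password>',
      MASTER_AUTO_POSITION=1;     -- assumes GTID-based replication
    START SLAVE;"
```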
Key features of Submariner include:
Compatibility and connectivity with existing clusters: Users can deploy Submariner into existing Kubernetes clusters, with the addition of Layer-3 network connectivity between pods in different clusters.
Secure paths: Encrypted network connectivity is implemented using IPsec tunnels.
Various connectivity mechanisms: While IPsec is the default connectivity mechanism out of the box, Rancher will enable different inter-connectivity plugins in the near future.
Centralized broker: Users can register and maintain a set of healthy gateway nodes.
Flexible service discovery: Submariner provides service discovery across multiple Kubernetes clusters.
CNI compatibility: Works with popular CNI drivers such as Flannel and Calico.
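Regarding your wish to keep the 3306 connections as isolated as possible: since Calico enforces Kubernetes NetworkPolicy, you can combine Submariner's connectivity with a policy on the master side that only admits traffic to 3306 from the replica cluster's pod CIDR. A hedged sketch, assuming source pod IPs are preserved across the tunnel; the namespace, labels and 10.245.0.0/16 CIDR are placeholders:

```sh
# Hedged sketch: restrict MySQL (3306) on the masters to the replica
# cluster's pod CIDR. Namespace, labels and 10.245.0.0/16 are placeholders.
kubectl --context cluster-a apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-master-allow-replica
  namespace: databases
spec:
  podSelector:
    matchLabels:
      app: mysql
      role: master
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.245.0.0/16   # pod CIDR of the replica cluster
      ports:
        - protocol: TCP
          port: 3306
EOF
```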
Prerequisites to use it:
At least 3 Kubernetes clusters, one of which is designated to serve as the central broker and is accessible by all of your connected clusters. This can be one of your connected clusters, but comes with the limitation that this cluster must be up in order to facilitate inter-connectivity/negotiation.
Different cluster/service CIDRs (as well as different Kubernetes DNS suffixes) between clusters, to prevent traffic selector/policy/routing conflicts (see the sketch after this list).
Direct IP connectivity between instances through the internet (or on the same network if not running Submariner over the internet). Submariner supports 1:1 NAT setups, but has a few caveats/provider-specific configuration instructions for this configuration.
Knowledge of each cluster's network configuration
A Helm version that supports the crd-install hook (v2.12.1+)
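To illustrate the CIDR/DNS prerequisite above, this is roughly what non-overlapping network settings for the two clusters could look like with kubeadm; the values are arbitrary examples, not recommendations:

```sh
# Illustration only: non-overlapping pod/service CIDRs and distinct DNS
# suffixes for the two clusters. Values are arbitrary placeholders.

# Cluster A (masters)
cat > cluster-a-kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
  dnsDomain: cluster-a.local
EOF

# Cluster B (replicas)
cat > cluster-b-kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: 10.245.0.0/16
  serviceSubnet: 10.112.0.0/16
  dnsDomain: cluster-b.local
EOF
```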
You can find more info, including installation steps, in the Submariner GitHub repository; a rough sketch of the Helm-based install flow is shown below. You may also find Rancher's Submariner multi-cluster article interesting and useful.
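For orientation only, the Helm-based installation has roughly the following shape. The chart names, repo URL and --set keys here are assumptions based on the project's README and may have changed, so verify everything against the Submariner docs before using it:

```sh
# Sketch only: verify chart names, repo URL and value keys against the
# Submariner README. All values below are placeholders (Helm v2 syntax).
helm repo add submariner-latest https://submariner-io.github.io/submariner-charts/charts

# On the broker cluster:
helm install submariner-latest/submariner-k8s-broker \
  --name submariner-k8s-broker --namespace submariner-k8s-broker

# On each connected cluster, pointing it at the broker:
helm install submariner-latest/submariner \
  --name submariner --namespace submariner \
  --set ipsec.psk="<shared-psk>" \
  --set broker.server="<broker-api-server>:6443" \
  --set broker.token="<broker-sa-token>" \
  --set broker.namespace="submariner-k8s-broker" \
  --set submariner.clusterCidr="10.244.0.0/16" \
  --set submariner.serviceCidr="10.96.0.0/16" \
  --set submariner.natEnabled="false"
```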
Good luck.