Best way for inter-cluster communication between microservices on Kubernetes?

4/8/2020

I am new to microservices and want to understand the best way to implement the behaviour below in microservices deployed on Kubernetes:

There are 2 different K8s clusters. Microservice B is deployed on both the clusters.

Now if a Microservice A calls Microservice B and B’s pods are not available in cluster 1, then the call should go to B of cluster 2.

I could have implemented this functionality using Netflix OSS, but I am not using it here.

Also, keeping the inter-cluster communication aside for a second, how should I communicate between microservices?

One way that I know is to create a Kubernetes Service of type NodePort for each microservice and use the node IP and the nodePort in the calling microservice.

Question: What if someone deletes the target microservice's K8s Service? K8s will assign a new random nodePort when the Service is recreated, and then I will have to go back to my calling microservice and change the nodePort of the target microservice. How can I decouple from the nodePort?
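For illustration, this is roughly what the call looks like in my calling microservice today, using Spring's RestTemplate (the node IP, nodePort and path below are made-up placeholder values):

    import org.springframework.web.client.RestTemplate;

    public class ServiceBClient {

        // Hard-coded node IP and nodePort of Microservice B's NodePort Service.
        // If the Service is recreated, K8s may assign a different nodePort and
        // this constant has to be updated by hand; that is the coupling I want to avoid.
        private static final String SERVICE_B_BASE_URL = "http://10.0.0.15:31742";

        private final RestTemplate restTemplate = new RestTemplate();

        public String getOrder(String orderId) {
            return restTemplate.getForObject(SERVICE_B_BASE_URL + "/orders/" + orderId, String.class);
        }
    }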

I researched kube-dns, but it seems it only works within a cluster.

I have very limited knowledge about Istio and Kubernetes Ingress. Does either of these provide what I am looking for?

Sorry for the long question. Any guidance will be very helpful.

-- ABhinav
java
kubernetes
kubernetes-ingress
microservices
spring-boot

3 Answers

4/15/2020

Your design is pretty close to the Istio multicluster example.

By following the steps in the Istio multicluster lab you'll get two clusters with one Istio control plane that balances the traffic between two ClusterIP Services located in two independent Kubernetes clusters.

The lab's configuration watches the traffic load, but by rewriting the controller Pod's code you can make it switch traffic to the second cluster if the first cluster's Service has no endpoints (i.e. no pods of that type are in the Ready state).

It's just an example; you can change the istiowatcher.go code to implement any logic you want.


There is a more advanced solution using Admiral as an Istio multicluster management automation tool.

Admiral provides automatic configuration for an Istio mesh spanning multiple clusters to work as a single mesh based on a unique service identifier that associates workloads running on multiple clusters to a service. It also provides automatic provisioning and syncing of Istio configuration across clusters. This removes the burden on developers and mesh operators, which helps scale beyond a few clusters.

This solution solves these key requirements for modern Kubernetes infrastructure:

  • Creation of service DNS entries decoupled from the namespace, as described in Features of multi-mesh deployments.
  • Service discovery across many clusters.
  • Supporting active-active & HA/DR deployments. We also had to support these crucial resiliency patterns with services being deployed in globally unique namespaces across discrete clusters.

This solution can become very useful at full scale.

-- VAS
Source: StackOverflow

4/9/2020

You can expose your application using Services. There are several types of Service you can use:

  • ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.

  • NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

  • LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.

  • ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record.

For internal communication you can use a Service of type ClusterIP, and you can use the Service's DNS name in your applications instead of an IP. For example, a Service called my-app-1 can be reached internally using the DNS name http://my-app-1, or with the FQDN http://my-app-1.<namespace>.svc.cluster.local.
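As a hedged sketch (the service name my-app-1, the default namespace and the /greeting path are just placeholders), a Spring Boot caller can then depend only on the Service DNS name:

    import org.springframework.stereotype.Service;
    import org.springframework.web.client.RestTemplate;

    @Service
    class MyApp1Client {

        // The Service DNS name stays stable even if the Service object is
        // recreated, so the caller never hard-codes a ClusterIP or a nodePort.
        private static final String BASE_URL = "http://my-app-1.default.svc.cluster.local";

        private final RestTemplate restTemplate = new RestTemplate();

        String fetchGreeting() {
            return restTemplate.getForObject(BASE_URL + "/greeting", String.class);
        }
    }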

For external communication, you can use NodePort or LoadBalancer.

NodePort is good when you have few nodes and know the IPs of all of them. And yes, per the Service docs you can specify a specific port number:

If you want a specific port number, you can specify a value in the nodePort field. The control plane will either allocate you that port or report that the API transaction failed. This means that you need to take care of possible port collisions yourself. You also have to use a valid port number, one that’s inside the range configured for NodePort use.
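If another application does call through a NodePort, one way to soften the coupling is to keep the node address and the pinned port in external configuration rather than in code. A hedged Spring sketch (the property names service-b.node-ip and service-b.node-port are made up):

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.stereotype.Service;
    import org.springframework.web.client.RestTemplate;

    @Service
    class ServiceBNodePortClient {

        private final RestTemplate restTemplate = new RestTemplate();
        private final String baseUrl;

        // The node IP and the (pinned) nodePort come from application.yml or
        // environment variables, so a change means editing configuration, not code.
        ServiceBNodePortClient(@Value("${service-b.node-ip}") String nodeIp,
                               @Value("${service-b.node-port}") int nodePort) {
            this.baseUrl = "http://" + nodeIp + ":" + nodePort;
        }

        String getStatus() {
            return restTemplate.getForObject(baseUrl + "/status", String.class);
        }
    }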

LoadBalancer gives you more flexibility, because you don't need to know all the node IPs; you just need to know the Service's IP and port. But LoadBalancer is only supported by cloud providers; if you want to implement it in a bare-metal cluster, I recommend you take a look at MetalLB.

Finally, there is another option: Ingress. In my view it is the best way to expose HTTP applications externally, because you can create rules by path and host, and it gives you much more flexibility than Services. But only HTTP/HTTPS is supported; if you need TCP, go with Services.
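As a hedged sketch (the host apps.example.com and the /service-b path prefix are placeholders for your own Ingress rules), a caller outside the cluster then only needs the Ingress host and path, not node IPs or nodePorts:

    import org.springframework.web.client.RestTemplate;

    public class IngressServiceBClient {

        // The Ingress rule for host apps.example.com routes /service-b/* to
        // Microservice B's ClusterIP Service inside the target cluster.
        private static final String INGRESS_BASE_URL = "https://apps.example.com/service-b";

        private final RestTemplate restTemplate = new RestTemplate();

        public String getOrder(String orderId) {
            return restTemplate.getForObject(INGRESS_BASE_URL + "/orders/" + orderId, String.class);
        }
    }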

I'd recommend you take a look at these links to understand in depth how Services and Ingress work:

Kubernetes Services

Kubernetes Ingress

NGINX Ingress

-- KoopaKiller
Source: StackOverflow

4/9/2020

Use Ingress for inter-cluster communication and a ClusterIP-type Service for intra-cluster communication between two microservices.
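A hedged sketch of what that looks like from the calling side, with client-side failover between the two clusters' Ingress hosts (both hosts are placeholders; a service mesh such as Istio or a global load balancer can also handle this failover for you):

    import org.springframework.web.client.RestClientException;
    import org.springframework.web.client.RestTemplate;

    public class ServiceBFailoverClient {

        // Placeholder Ingress hosts exposing Microservice B in each cluster.
        private static final String CLUSTER_1_URL = "https://b.cluster-1.example.com";
        private static final String CLUSTER_2_URL = "https://b.cluster-2.example.com";

        private final RestTemplate restTemplate = new RestTemplate();

        public String getOrder(String orderId) {
            try {
                // Try B behind cluster 1's Ingress first.
                return restTemplate.getForObject(CLUSTER_1_URL + "/orders/" + orderId, String.class);
            } catch (RestClientException e) {
                // If B has no ready pods in cluster 1 (e.g. the Ingress returns 503),
                // fall back to B behind cluster 2's Ingress.
                return restTemplate.getForObject(CLUSTER_2_URL + "/orders/" + orderId, String.class);
            }
        }
    }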

-- Arghya Sadhu
Source: StackOverflow