How sockets or communication channels are maintained in a distributed system

1/14/2020

I am new to distributed systems and ran into this problem when I needed to deploy a gRPC service to Kubernetes (GKE). As far as I know, when a client initiates an RPC, it creates a long-lived HTTP/2 connection and further calls are multiplexed on it. I would like to send/push notifications or similar messages to the client through this connection. If I deploy to multiple pods, the connections are spread across them, and I am not sure what the best way is to locate the instance holding the channel registered to a given client. A possible solution could be: as soon as a client initiates a connection, keep a reference of clientId and pod IP (or some other identifier) in a centralized service, and have other pods look up that pod and forward the message to it. Is something like this advisable, or is there an existing solution for this? I am unfamiliar with this space and any suggestion is highly appreciated.
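To make the idea concrete, here is a rough sketch of the registration half of what I have in mind, assuming a shared Redis instance as the central registry (the go-redis client, the key names, and the POD_IP environment variable are just illustrative choices, not a fixed design):

package notify

import (
	"context"
	"os"
	"time"

	"github.com/redis/go-redis/v9"
)

// registry maps a clientId to the address of the pod that currently holds
// that client's long-lived gRPC stream.
type registry struct {
	rdb   *redis.Client
	podIP string // this pod's own IP, e.g. injected via the Downward API
}

func newRegistry() *registry {
	return &registry{
		rdb:   redis.NewClient(&redis.Options{Addr: "redis:6379"}),
		podIP: os.Getenv("POD_IP"),
	}
}

// RegisterClient would be called from the initial register RPC handler as
// soon as a client opens its stream on this pod.
func (r *registry) RegisterClient(ctx context.Context, clientID string) error {
	// A TTL lets stale entries expire if a pod dies without cleaning up;
	// the pod would refresh the key periodically while the stream is alive.
	return r.rdb.Set(ctx, "client:"+clientID, r.podIP+":50051", 5*time.Minute).Err()
}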

Edit: (response to @mebius99)

While looking at deployment options, I stumbled upon GKE; other cloud deployment options were limited because of my use of gRPC/HTTP2. Thanks for mentioning service discovery, and that it or a service mesh might be an option. With gRPC, the client maintains a long-lived connection to a single pod. So I want every pod to be able to query, based on a unique clientId (clients can make an initial register RPC call), which pod the client is connected to, so it can make use of that connection, and also a way for pods to forward messages between them. So, something like: when I get a registration call from a client, I update the central registry with the clientId and pod IP; then any pod can look it up and forward the message to that pod, which in turn forwards it to the client through the existing streaming connection. You are guiding me in the right direction; please let me know whether the above is possible in a container environment.
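Continuing the registry sketch from above (same package), this is roughly the forwarding half I am imagining; pb.NewForwarderClient and the Forward RPC are hypothetical generated stubs for an internal pod-to-pod service, not an existing API:

package notify

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/notify/gen/forwarderpb" // hypothetical generated stubs
)

// ForwardToClient looks up which pod owns clientID's stream and relays the
// payload to that pod over an internal pod-to-pod gRPC call.
func (r *registry) ForwardToClient(ctx context.Context, clientID string, payload []byte) error {
	// 1. Find the pod that holds the client's stream.
	addr, err := r.rdb.Get(ctx, "client:"+clientID).Result()
	if err != nil {
		return err // client not registered, or registry unavailable
	}

	// 2. Dial that pod directly (plaintext inside the cluster, for the sketch).
	conn, err := grpc.Dial(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return err
	}
	defer conn.Close()

	// 3. Call the internal Forward RPC; the receiving pod then writes the
	//    payload onto the client's existing server-side stream.
	_, err = pb.NewForwarderClient(conn).Forward(ctx, &pb.ForwardRequest{
		ClientId: clientID,
		Payload:  payload,
	})
	return err
}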

thank you.

-- bsr
distributed-computing
distributed-system
google-kubernetes-engine
grpc
kubernetes

3 Answers

1/15/2020
-- bells17
Source: StackOverflow

1/15/2020

I'd suggest starting with the Kubernetes Service concept and Service discovery. External HTTP(S) Load Balancing should fit your needs.

In case you need something more sophisticated, Envoy proxy + Network Load Balancing could be a solution, as mentioned here.
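For the pod-to-pod part, a headless Service (clusterIP: None) is one form of in-cluster service discovery: its DNS name resolves to the individual pod IPs, so any pod can enumerate its peers. A minimal Go sketch, with a made-up Service name:

package main

import (
	"fmt"
	"net"
)

func main() {
	// A headless Service, here called "grpc-pods", returns one DNS A record
	// per ready pod instead of a single virtual IP.
	ips, err := net.LookupHost("grpc-pods.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		fmt.Println("peer pod:", ip) // dial each pod directly on its gRPC port
	}
}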

-- mebius99
Source: StackOverflow

1/14/2020

It sounds like you want to implement some kind of Pub-Sub system.

You should first do a back-of-the-envelope calculation of the scale, such as how many clients and how many messages per second.

Then you can choose whether to implement it yourself or pick an off-the-shelf system, such as https://doc.akka.io/docs/alpakka/current/google-cloud-pub-sub-grpc.html
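For illustration, a minimal Go sketch of that pub-sub pattern with the Google Cloud Pub/Sub client (the link above is the Akka/Alpakka connector for the same service; the project, topic, and subscription names here are made up): any pod publishes client-addressed messages, every pod subscribes, and each pod pushes only to clients whose streams it currently holds.

package main

import (
	"context"
	"log"

	"cloud.google.com/go/pubsub"
)

func main() {
	ctx := context.Background()
	client, err := pubsub.NewClient(ctx, "my-project")
	if err != nil {
		log.Fatal(err)
	}

	// Publisher side: any pod publishes a message addressed to one client.
	topic := client.Topic("client-messages")
	res := topic.Publish(ctx, &pubsub.Message{
		Data:       []byte("hello"),
		Attributes: map[string]string{"clientId": "client-42"},
	})
	if _, err := res.Get(ctx); err != nil {
		log.Fatal(err)
	}

	// Subscriber side: each pod has its own subscription and pushes only to
	// clients whose streams it currently holds (that check is omitted here).
	sub := client.Subscription("pod-abc123")
	err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
		log.Println("would push to client", m.Attributes["clientId"])
		m.Ack()
	})
	if err != nil {
		log.Fatal(err)
	}
}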

-- Tang
Source: StackOverflow