Architect a messaging application using WebSockets on Kubernetes

11/29/2019

I want to architect a messaging application using WebSockets running on Kubernetes, and I want to know how to solve a few problems...

Context

So, say you are building a chat application. Your chat application needs to communicate with the back-end frequently to work (e.g. to receive and send messages), so assuming the back-end is built with Node and the front-end with Electron, I think it would make sense to use WebSockets in this scenario.
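To make that concrete, this is roughly what the front-end side would look like from the Electron renderer, using the plain browser WebSocket API (the URL and message shape here are made up for illustration):

```typescript
// Minimal sketch of the Electron/renderer side (browser WebSocket API).
// The URL and message shape are illustrative, not part of the question.
const socket = new WebSocket("wss://chat.example.com/ws");

socket.addEventListener("open", () => {
  // Send a chat message once the connection is established.
  socket.send(JSON.stringify({ type: "message", room: "general", text: "hello" }));
});

socket.addEventListener("message", (event) => {
  // Messages pushed by the server arrive on the same long-lived connection.
  const payload = JSON.parse(event.data);
  console.log("received", payload);
});
```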

Problem 1 - Load Balancing

Your WebSocket server is suffering from bad performance, so you want to fix that. In this scenario I think it would make sense to run several instances of the WebSocket server and balance the incoming traffic equally among them (a load balancer). For HTTP/HTTPS requests this is straightforward: it does not matter which server instance a request is routed to, since each request is a "one time" exchange. WebSockets are different: if the client connected to instance 3, the rest of its traffic should also go to instance 3, because the server might keep client state (like whether or not the client is authenticated).
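For reference, I have been looking at Kubernetes session affinity as one possible piece of the answer; a sketch of the kind of Service I mean (all names and ports are placeholders):

```yaml
# Sketch only: a Service that prefers to route a given client IP
# to the same pod across connections. Names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: chat-websocket
spec:
  selector:
    app: chat-websocket
  ports:
    - port: 80
      targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
```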

Problem 2 - Division of Concerns

As the chat application gets bigger and bigger, more and more things need to be handled by the WebSocket servers, so it would make sense to split it into different concerns (e.g. messaging, user authentication etc.). But assuming client state has to be kept, how can these different concerns know that state? (Shared state among concerns.)
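My current idea (which may be wrong) is to move the per-client state out of the WebSocket processes into something like Redis, so every concern can read it. A rough sketch of what I mean, using the ioredis package with made-up key and field names:

```typescript
import Redis from "ioredis";

// Sketch: per-client session state kept in Redis so that any concern
// (messaging, auth, ...) can read it. Key and field names are made up.
const redis = new Redis({ host: "redis", port: 6379 });

async function markAuthenticated(clientId: string, userId: string): Promise<void> {
  await redis.hset(`session:${clientId}`, "userId", userId, "authenticated", "true");
  await redis.expire(`session:${clientId}`, 60 * 60); // expire after 1 hour
}

async function isAuthenticated(clientId: string): Promise<boolean> {
  return (await redis.hget(`session:${clientId}`, "authenticated")) === "true";
}
```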

Problem 3 - Event Emitting

You want to implement an event that fires, for each client, every time a user sends a message. How can this be achieved when there are several instances? (E.g. client 1 is connected to WebSocket server instance 1 and sends a message; client 2 is connected to WebSocket server instance 2, and the event needs to fire for that client too.)
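The only approach I can think of is some shared pub/sub layer between the instances; a rough sketch of that idea, using Redis pub/sub purely as an example (hosts, ports and the channel name are placeholders):

```typescript
import Redis from "ioredis";
import { WebSocketServer, WebSocket } from "ws";

// Sketch: every WebSocket instance subscribes to the same channel, so a
// message received by any instance is fanned out to clients everywhere.
const pub = new Redis({ host: "redis" });
const sub = new Redis({ host: "redis" }); // a subscribing connection must be dedicated

const wss = new WebSocketServer({ port: 8080 });

sub.subscribe("chat-messages");
sub.on("message", (_channel, payload) => {
  // Deliver to every client connected to *this* instance.
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(payload);
  }
});

wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    // Publish instead of sending directly, so other instances see it too.
    pub.publish("chat-messages", data.toString());
  });
});
```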

-- VimHax
architecture
kubernetes
load-balancing
node.js
websocket

1 Answer

11/29/2019

WebSockets: one request, one long-running connection

Your load-balancing problem is largely handled for you. Clients will be load balanced across the different instances, but when using WebSockets a client makes only one request to connect; it then keeps that TCP connection to the back-end and sends multiple messages over the same connection, so all of its traffic stays on the instance it connected to.
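To make that concrete, here is a minimal sketch of my own (using the ws package; the port and state fields are assumptions) where the per-client state lives next to the socket on the one instance that accepted the connection:

```typescript
import { WebSocketServer, WebSocket } from "ws";

// Sketch: per-connection state lives on the instance that accepted
// the connection. Port and state fields are illustrative.
interface ClientState {
  authenticated: boolean;
  userId?: string;
}

const wss = new WebSocketServer({ port: 8080 });
const state = new Map<WebSocket, ClientState>();

wss.on("connection", (socket) => {
  // The client made exactly one HTTP upgrade request to get here;
  // every later message arrives over this same TCP connection.
  state.set(socket, { authenticated: false });

  socket.on("message", (data) => {
    const client = state.get(socket)!;
    // ...handle the message using the state kept on this instance...
    socket.send(`echo (authenticated=${client.authenticated}): ${data.toString()}`);
  });

  socket.on("close", () => state.delete(socket));
});
```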

Separation of concerns

more and more things need to be handled by the WebSocket servers, so it would make sense to split it into different concerns (e.g. messaging, user authentication etc.)

Yes, you should separate concerns. E.g. you could have one authentication service perform an OpenID Connect flow, and the user can then use the access token when connecting over WebSockets or when sending other API requests.
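As a sketch of that flow (the details here are assumptions: the token is passed as a query parameter and verified with the jsonwebtoken package, but it could just as well go in a header or the first message):

```typescript
import { WebSocketServer } from "ws";
import jwt from "jsonwebtoken";

// Sketch: the client obtains an access token from the auth service first,
// then presents it when opening the WebSocket connection.
// The secret, port and query-parameter name are placeholders.
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket, request) => {
  const url = new URL(request.url ?? "", "http://localhost");
  const token = url.searchParams.get("access_token");

  try {
    const claims = jwt.verify(token ?? "", "replace-with-real-key-or-jwks");
    console.log("authenticated connection for", claims);
  } catch {
    socket.close(4001, "invalid or missing access token"); // 4xxx = application codes
  }
});
```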

A web client usually limits the number of WebSocket connections it will keep open to the same domain, so it is better to expose only one WebSocket service. But you could use some kind of message broker, e.g. MQTT over WebSocket, and route messages to the different back-end services.
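A rough sketch of what MQTT over WebSocket could look like for one back-end service, using the mqtt package; the broker URL and topic names are placeholders, and each concern would subscribe only to its own topics:

```typescript
import mqtt from "mqtt";

// Sketch: a messaging service that talks MQTT over WebSocket to a broker
// (e.g. Mosquitto with a WebSocket listener). URL and topics are placeholders.
const client = mqtt.connect("ws://mqtt-broker:9001");

client.on("connect", () => {
  // This service only cares about chat traffic; the auth service would
  // subscribe to its own topics instead.
  client.subscribe("chat/+/messages");
});

client.on("message", (topic, payload) => {
  console.log(`message on ${topic}: ${payload.toString()}`);
});
```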

Emitting messages

You want to implement an event that fires, for each client, every time a user sends a message.

If you use a message broker as described above, all clients can subscribe to channels, and when a message is published to a channel it will be routed to all subscribers, regardless of which WebSocket instance they are connected to.
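From the client's side that could look roughly like this (again assuming MQTT over WebSocket; the broker URL and topic are placeholders):

```typescript
import mqtt from "mqtt";

// Sketch of a chat client: subscribe to a room topic and publish to it.
// Broker URL and topic name are placeholders.
const client = mqtt.connect("ws://mqtt-broker:9001");

client.on("connect", () => {
  client.subscribe("chat/general/messages");
  client.publish("chat/general/messages", "hello everyone");
});

client.on("message", (_topic, payload) => {
  // Fired for every subscriber, no matter which instance or service
  // originally published the message.
  console.log("new chat message:", payload.toString());
});
```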

-- Jonas
Source: StackOverflow