How to push events from Kafka via WebSockets in a Kubernetes cluster

1/21/2021

I am running a serverless application, currently on AWS using Lambda, API Gateway, and self-deployed Kafka, that pushes events from Kafka to WebSocket connections (via API GW persistent connections) whose connection IDs are tracked in MongoDB. So far this works well. Here's roughly how it looks:

WSClient <- API GW <- Lambda <- Kafka
                         |
                      MongoDB

However, I am interested in migrating from the Lambdas to K8s, and I am wondering how that might work. As I understand it, one issue is making the WebSocket clients sticky through the ingress; I've found plenty of examples for that, and they would solve problem 1 (always hitting the same server). However, I also have problem 2: there needs to be a Kafka consumer service that pushes messages out to the WebSockets (in this case to exactly one client per message), so that Kafka consumer needs to talk to exactly the service instance that holds the connection to that WS client. So I'm thinking it must look like this:

WS Client <- Ingress <-+- WS Server pod   ...
                       |  ...                  <-???- Kafka Consumer Service <- Kafka
                       `- WS Server pod   ...
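To make problem 2 concrete, here is a minimal sketch of the routing I have in mind (all names here are hypothetical, and a plain dict stands in for MongoDB): each WS Server pod registers the connections it holds under its own address, and the Kafka consumer looks up the owning pod per message before forwarding to it.

```python
# Sketch of connection routing between the Kafka consumer and the WS pods.
# connection_registry stands in for MongoDB: connection_id -> pod address.
connection_registry = {}

def register_connection(connection_id: str, pod_address: str) -> None:
    """Called by a WS Server pod when a client connects to it."""
    connection_registry[connection_id] = pod_address

def unregister_connection(connection_id: str) -> None:
    """Called on disconnect so the consumer stops routing to a dead socket."""
    connection_registry.pop(connection_id, None)

def route(connection_id: str):
    """Kafka consumer side: find the pod holding this connection.

    Returns the pod's address, or None if the client is gone; the consumer
    would then POST (or gRPC) the message to that pod, which writes it to
    the open WebSocket.
    """
    return connection_registry.get(connection_id)

# Simulated flow: a pod registers a client, the consumer routes a message.
register_connection("conn-42", "10.1.2.3:8080")
print(route("conn-42"))
```

The open question is what the `<-???-` hop should actually be: direct pod-to-pod HTTP as sketched here, a headless service, or some pub/sub layer in between.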

In AWS, problem 2 is solved by persisting the connection ID: when a message arrives, the Lambda looks the ID up and sends the message to the API GW, which keeps the connection open under that persistent ID. So far this works.
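For reference, a simplified version of what the Lambda does today (the MongoDB lookup and the API GW management client are stubbed with plain Python so this runs standalone; in the real handler the send is boto3's `apigatewaymanagementapi` `post_to_connection` call):

```python
# Stand-in for the MongoDB collection mapping users to connection IDs.
connections_by_user = {"alice": "abc123="}

# Records what would be pushed through the API GW persistent connection.
sent = []

def post_to_connection(connection_id: str, data: bytes) -> None:
    """Stub for apigw.post_to_connection(ConnectionId=..., Data=...)."""
    sent.append((connection_id, data))

def handle_kafka_record(record: dict) -> bool:
    """Look up the persistent connection ID for the target user and push.

    Returns False when the user has no live connection (the real handler
    would also prune stale IDs on a GoneException from API GW).
    """
    connection_id = connections_by_user.get(record["user"])
    if connection_id is None:
        return False
    post_to_connection(connection_id, record["payload"])
    return True

handle_kafka_record({"user": "alice", "payload": b'{"event": "hello"}'})
```

In K8s I would presumably keep the same MongoDB lookup, but the push target changes from API GW to whichever pod holds the socket.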

What would be potential scenarios / building blocks to achieve the same solution in K8s?

-- André Pareis
apache-kafka
kubernetes
websocket

0 Answers