RabbitMQ double binding -- duplicate output

2/3/2022

System architecture:

I have a RabbitMQ node with unallocated and allocated exchanges and queues. I am spinning up pods (consumers) which, on initialization, tell a RabbitMQ management microservice that they are ready to start consuming messages.

Scenario:

A message with routing key X comes in. If the allocated exchange can route it to a dedicated queue (one queue per pod, direct exchange), it is delivered and we are happy. Otherwise the routing key is new and must be allocated: the message goes through the unallocated exchange (fanout) to all pods, and a pod that is "not consuming messages" (i.e. is available to be allocated) claims the key and has a dedicated queue bound to it. Any future message with routing key X then goes directly to the allocated pod.
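To make the routing decision concrete, here is a minimal single-JVM sketch of the topology described above. This is an in-memory model, not RabbitMQ client code: the map stands in for the allocated direct exchange's bindings, and the pod list stands in for the unallocated fanout exchange; all names are illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// In-memory model of the two-exchange topology: an "allocated" direct
// exchange (routing key -> dedicated queue) with an "unallocated" fanout
// exchange as the fallback that reaches every pod.
public class RoutingModel {
    // allocated exchange: one dedicated queue per routing key
    private final Map<String, String> allocated = new HashMap<>();
    // pods currently bound to the unallocated fanout exchange
    private final List<String> fanoutPods = new ArrayList<>();

    public void bindPod(String pod)                { fanoutPods.add(pod); }
    public void allocate(String key, String queue) { allocated.put(key, queue); }

    /** Returns the destinations a message with this routing key would reach. */
    public List<String> route(String routingKey) {
        String queue = allocated.get(routingKey);
        if (queue != null) {
            return List.of(queue);      // direct: dedicated queue only
        }
        return List.copyOf(fanoutPods); // fanout: every pod sees it
    }
}
```

Once `allocate("X", ...)` has been recorded, `route("X")` stops fanning out and hits only the dedicated queue, which is exactly the transition the race below interferes with.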

Problem:

If two messages with the same routing key arrive within a small interval (milliseconds), both are processed as "unallocated", so the key is allocated to two different pods, and every future message with that routing key is duplicated across the two pods. There simply isn't enough time between the two messages to register the routing key as "allocated".

Working solution (want to get rid of this):

Use a Hazelcast lock keyed on the routing key. This puts a heavy strain on the system.
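For reference, the shape of that fix, sketched as a single-JVM analogue (Hazelcast plays the per-key-lock role across pods in the real system; the class and method names here are illustrative). Holding the key's lock across the check-then-allocate step closes the race, because the second pod re-checks only after the first has recorded its allocation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

// Per-routing-key lock: check and allocate become one atomic step, so at
// most one pod can ever win a given routing key.
public class LockedAllocator {
    private final ConcurrentMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();
    private final ConcurrentMap<String, String> allocations = new ConcurrentHashMap<>();

    /** Returns true only for the pod that actually wins the allocation. */
    public boolean tryAllocate(String routingKey, String pod) {
        ReentrantLock lock = locks.computeIfAbsent(routingKey, k -> new ReentrantLock());
        lock.lock();
        try {
            if (allocations.containsKey(routingKey)) {
                return false;                 // another pod already owns the key
            }
            allocations.put(routingKey, pod); // check and act under one lock
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```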

Is there a native way to lock on RabbitMQ routing keys? Is there a topology I can set up (perhaps exchange to exchange) which will fix this issue? Is there anything on the RabbitMQ side I can do? Can we set up unique routing keys (i.e. 1 queue per routing key)?

-- HishamNajem
java
kubernetes
rabbitmq

0 Answers