Make RabbitMQ durable/persistent queues survive Kubernetes pod restart

6/21/2019

Our application uses RabbitMQ with only a single node. It is run in a single Kubernetes pod.

We use durable/persistent queues, but any time that our cloud instance is brought down and back up, and the RabbitMQ pod is restarted, our existing durable/persistent queues are gone.

At first, I thought that it was an issue with the volume that the queues were stored on not being persistent, but that turned out not to be the case.

It appears that the queue data is stored in /var/lib/rabbitmq/mnesia/<user@hostname>. Since the pod's hostname changes each time, RabbitMQ creates a new set of data under the new hostname and loses access to the previously persisted queues. I have many sets of files built up in the mnesia folder, all from previous restarts.

How can I prevent this behavior?

The closest answer that I could find is in this question, but if I'm reading it correctly, this would only work if you have multiple nodes in a cluster simultaneously, sharing queue data. I'm not sure it would work with a single node. Or would it?

-- JoeMjr2
kubernetes
persistence
persistent-storage
rabbitmq

2 Answers

6/22/2019

How can I prevent this behavior?

By using a StatefulSet, as is intended for the case where Pods have persistent data associated with their "identity." The Helm chart is a fine place to start reading, even if you don't end up using it.
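For illustration, a minimal single-node sketch of that approach (the names, image tag, and storage size are placeholders; the Helm chart generates a much more complete manifest). The point is that the StatefulSet gives the pod a stable name (rabbitmq-0), so the RabbitMQ node name and its mnesia directory no longer change across restarts, and the volumeClaimTemplate re-attaches the same volume every time:

    # Headless Service required by the StatefulSet; gives the pod a stable DNS name.
    apiVersion: v1
    kind: Service
    metadata:
      name: rabbitmq
    spec:
      clusterIP: None
      selector:
        app: rabbitmq
      ports:
        - name: amqp
          port: 5672
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: rabbitmq
    spec:
      serviceName: rabbitmq
      replicas: 1
      selector:
        matchLabels:
          app: rabbitmq
      template:
        metadata:
          labels:
            app: rabbitmq
        spec:
          containers:
            - name: rabbitmq
              image: rabbitmq:3
              ports:
                - containerPort: 5672
              volumeMounts:
                - name: data
                  mountPath: /var/lib/rabbitmq   # the mnesia directory lives under this path
      volumeClaimTemplates:                      # one PVC per replica, re-attached on every restart
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 1Gi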

-- mdaniel
Source: StackOverflow

11/26/2019

I ran into this issue myself, and the quickest way I found was to specify the environment variable RABBITMQ_NODENAME = "yourapplicationsqueuename" and to make sure I only had 1 replica for my pod.
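For illustration, a minimal sketch of that workaround, assuming the existing persistent storage is exposed as a PVC named rabbitmq-data (a placeholder). Because RABBITMQ_NODENAME is a fixed value, the mnesia directory name no longer depends on the pod's hostname:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: rabbitmq
    spec:
      replicas: 1                        # must stay at 1 when using a single fixed node name
      selector:
        matchLabels:
          app: rabbitmq
      template:
        metadata:
          labels:
            app: rabbitmq
        spec:
          containers:
            - name: rabbitmq
              image: rabbitmq:3
              env:
                - name: RABBITMQ_NODENAME
                  value: "yourapplicationsqueuename"   # any fixed value; the data directory is named after it
              volumeMounts:
                - name: data
                  mountPath: /var/lib/rabbitmq
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: rabbitmq-data               # assumes an existing PVC with this (placeholder) name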

-- Rob G
Source: StackOverflow