Kubernetes Configurations
Kubernetes StatefulSet (replicas=2) for the live-nodes:
- live-node1 (paired with backup-node1 for HA)
- live-node2 (paired with backup-node2 for HA)

Kubernetes Service for the live-nodes:
- live-node

Kubernetes StatefulSet (replicas=2) for the backup-nodes:
- backup-node1
- backup-node2

Kubernetes Service for the backup-nodes:
- backup-node
Note: Clients (publishers/consumers) always connect to the cluster via the K8s Service - live-node.
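For reference, here is a rough sketch of the live-node StatefulSet and Service described above (not the actual manifests; the image, labels, and port are assumptions, and the StatefulSet pods would really be named live-node-0/live-node-1). The backup-node pair would follow the same pattern:

```yaml
# Hypothetical sketch of the live-node half of the topology above.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: live-node
spec:
  serviceName: live-node
  replicas: 2                # live-node1 / live-node2 in the description above
  selector:
    matchLabels:
      app: live-node
  template:
    metadata:
      labels:
        app: live-node
    spec:
      containers:
        - name: broker
          image: apache/activemq-artemis:latest   # assumed image
          ports:
            - containerPort: 61616                # assumed acceptor port
---
apiVersion: v1
kind: Service
metadata:
  name: live-node            # the Service clients connect through
spec:
  selector:
    app: live-node
  ports:
    - port: 61616
      targetPort: 61616
```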
Scenario
- client1 is connected to live-node1
- live-node1 goes down
- backup-node1 takes over
- client1 will try to reconnect via the K8s Service - live-node
- client1 ends up reconnecting to live-node1 (if it is back up) OR connecting to live-node2
My Understanding
- The clients that were connected to live-node1 will end up connecting to live-node2.
- Since there is no consumer on backup-node1, its messages will be redistributed to live-node2.
Please elaborate on this behavior and correct me if I am wrong.
Strictly speaking, the message-load-balancing type configured on your cluster-connection is completely unrelated to how backups work. The message-load-balancing type, as the name suggests, is related to how messages are load-balanced around a cluster. How the backup behaves is determined by the ha-policy you have configured.
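To make that concrete, here is a rough sketch of a replication-based ha-policy (not taken from your actual broker.xml; element names differ slightly across Artemis versions, and the group-name is just an assumed value used to pin backup-node1 to live-node1):

```xml
<!-- live-node1: hypothetical broker.xml fragment -->
<ha-policy>
   <replication>
      <master>
         <group-name>pair-1</group-name>                    <!-- assumed; ties this live to its backup -->
         <check-for-live-server>true</check-for-live-server>
      </master>
   </replication>
</ha-policy>

<!-- backup-node1: hypothetical broker.xml fragment -->
<ha-policy>
   <replication>
      <slave>
         <group-name>pair-1</group-name>                    <!-- must match the live broker's group-name -->
         <allow-failback>true</allow-failback>              <!-- fail back when live-node1 returns -->
      </slave>
   </replication>
</ha-policy>
```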
The whole point of having a backup is that when the live node fails, all the clients connected to the live node will fail over to the backup node. Furthermore, the backup node will have all the same messages that the live node had (either via replication or shared storage). Therefore, your expectation that all the clients connected to live-node1 will connect to live-node2 when live-node1 fails is misguided.
That said, if clients do actually connect to live-node2 instead of backup-node1, then the message-load-balancing type would need to be ON_DEMAND if you wanted messages to eventually be redistributed from backup-node1 to live-node2. Obviously the redistribution-delay would also need to be greater than 0.
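For illustration, those two settings live in broker.xml roughly as follows (the cluster-connection name, connector, discovery group, and address match shown here are placeholders, not values from your setup):

```xml
<cluster-connections>
   <cluster-connection name="my-cluster">                 <!-- placeholder name -->
      <connector-ref>netty-connector</connector-ref>      <!-- placeholder connector -->
      <!-- only route/redistribute messages to nodes that have matching consumers -->
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <discovery-group-ref discovery-group-name="my-discovery-group"/>
   </cluster-connection>
</cluster-connections>

<address-settings>
   <address-setting match="#">                            <!-- '#' matches every address -->
      <!-- wait 5 seconds without a local consumer before redistributing;
           the default of -1 disables redistribution entirely -->
      <redistribution-delay>5000</redistribution-delay>
   </address-setting>
</address-settings>
```

With ON_DEMAND and a non-negative redistribution-delay, messages sitting on backup-node1 (which has no consumers) would eventually be moved to live-node2, where client1 is now connected.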