Is there a way to enable stickiness between the client and target pods when using AWS Global Accelerator and NLB?

1/19/2022

On an AWS EKS cluster, I have deployed a stateful application. To load balance the application across different pods and availability zones, I have added an HAProxy Ingress Controller that is exposed through an external AWS Network Load Balancer (NLB).

I have one NLB in this cluster which points to the HAProxy Service. On top of the NLB I have created a global accelerator and I've set the NLB as its target endpoint.
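For reference, the HAProxy Service behind the NLB looks roughly like this (a simplified sketch; the names, namespace, and ports are illustrative, not my exact manifest):

```yaml
# Simplified sketch of the HAProxy ingress Service (names/ports are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress
  namespace: ingress
  annotations:
    # Ask the in-tree cloud provider to provision an external AWS NLB.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: haproxy-ingress
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
```

The Global Accelerator is then created separately, with this NLB registered as the endpoint in its endpoint group.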

My requirement is to ensure that once a user connects to the DNS name of the Global Accelerator, they will always be directed to the same endpoint server, i.e. the same HAProxy pod.

The connection workflow goes like this: Client User -> Global Accelerator -> NLB -> HAProxy pod.

While searching for ways to make this work, here's what I've done:

  • To ensure stickiness between the NLB and its target (HAProxy pods) I have enabled stickiness on the NLB targets.
  • For stickiness between the Global Accelerator and the NLB, it looks like the right thing to do is to set the Global Accelerator's Client Affinity attribute to "Source IP". According to the documentation, with this setting the Global Accelerator honors client affinity by routing all connections from the same source IP address to the same endpoint.
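For reference, both settings above can be applied with the AWS CLI roughly as follows (the ARNs are placeholders, not real resources):

```shell
# Enable source-IP stickiness on the NLB target group (ARN is a placeholder).
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:...:targetgroup/haproxy/... \
  --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=source_ip

# Set client affinity to SOURCE_IP on the Global Accelerator listener.
aws globalaccelerator update-listener \
  --listener-arn arn:aws:globalaccelerator::...:accelerator/.../listener/... \
  --client-affinity SOURCE_IP
```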

My expectation was that with these attributes enabled, the user would always be connected to the same NLB, which would then forward them to the same HAProxy pod.

After testing, when I connected to my application via the NLB's DNS name, the goal was achieved and I got a sticky connection. However, when I connected via the Global Accelerator, my session kept crashing.

Any ideas why that might be, or suggestions for a different way to achieve this?

-- YanaT
amazon-eks
aws-global-accelerator
aws-nlb
kubernetes

0 Answers