I am using Kubernetes 1.9.2, created with kubeadm. The cluster is running on 4 EC2 nodes.
I have a deployment that requires a per-pod cache. To accomplish that, we enabled session affinity on the ClusterIP Service.
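For reference, a minimal sketch of what that Service looks like; the name, selector, and ports are placeholders and the timeout is just the default:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-cached-app          # placeholder name
spec:
  selector:
    app: my-cached-app         # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
  sessionAffinity: ClientIP    # route all requests from one source IP to the same pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800    # default affinity timeout (3 hours)
```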
Since I have an ELB in front of my Kubernetes cluster, I wonder how the session affinity behaves.
The natural behavior would be that requests from different client IPs land on different pods, but since the traffic is forwarded through the ELB, which IP does session affinity recognize: the ELB IP or the actual client IP?
When I check the traffic to the pods, I see that 1 or 2 pods get all the requests and the other 2 pods are just waiting.
many thanks for any help.
SessionAffinity recognizes the client IP, and the ELB should pass the client IP through.
I think you should work with HTTP Headers and Classic Load Balancers and set up the X-Forwarded-For: client-ip-address header.
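If you are running the NGINX Ingress controller behind the ELB (as the issue below suggests), a hedged sketch of making it trust that header; the ConfigMap name and namespace depend on how the controller was installed:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration      # name/namespace are assumptions; match your controller's --configmap flag
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"  # trust the X-Forwarded-For header set by the ELB (HTTP/HTTPS listeners)
  use-proxy-protocol: "false"    # alternative: enable PROXY protocol on both the ELB and NGINX for TCP listeners
```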
Also, this seems to be a known issue: Enabling Session affinity goes to a single pod only #3056. It was reported for versions 0.18.0 and 0.19.0 of the NGINX Ingress controller. The issue was closed with a comment that it was fixed in version 0.21.0, but in December the original author said it still didn't work for him.