I was looking into how to use cookie affinity in GKE, and I successfully implemented it (thanks to this question: Problems configuring Ingress with cookie affinity). I can now see that I am receiving the GCLB cookie, but for some reason requests are not coming back to the same pod replica.
I've created a YAML with the following:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-bsc-deployment
spec:
  selector:
    matchLabels:
      purpose: bsc-config-demo
  replicas: 3
  template:
    metadata:
      labels:
        purpose: bsc-config-demo
    spec:
      containers:
      - name: hello-app-container
        image: gcr.io/google-samples/hello-app:1.0
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-bsc-backendconfig
spec:
  timeoutSec: 40
  connectionDraining:
    drainingTimeoutSec: 60
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 50
---
apiVersion: v1
kind: Service
metadata:
  name: my-bsc-service
  labels:
    purpose: bsc-config-demo
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bsc-backendconfig"}}'
spec:
  type: NodePort
  selector:
    purpose: bsc-config-demo
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-bsc-ingress
spec:
  backend:
    serviceName: my-bsc-service
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-bsc-service
          servicePort: 80
---
What might be causing such an issue?
I had to confront the same problem. The last layer balancing the traffic is kube-proxy, and this proxy doesn't support session affinity at all.
To solve the problem you have to use a different ingress controller that replaces kube-proxy with a proxy service that supports session affinity; in my case I used Nginx. There are some good examples of how to implement it in the GitHub repository here, basic usage here, and you can also use annotations to configure Nginx according to each Ingress's needs; the complete list of annotations is here.
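As a sketch of the approach above, cookie affinity with the Nginx ingress controller is configured through annotations on the Ingress object itself (the cookie name and max-age values here are illustrative, not required):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-bsc-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"                          # route through the Nginx controller, not GCLB
    nginx.ingress.kubernetes.io/affinity: "cookie"                # enable cookie-based session affinity
    nginx.ingress.kubernetes.io/session-cookie-name: "route"      # illustrative cookie name
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"  # illustrative lifetime in seconds
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-bsc-service
          servicePort: 80
```

Because the Nginx controller proxies directly to pod endpoints rather than through the Service, the cookie it sets pins a client to a specific pod.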
The reason is this, from GCP HTTP(S) Load Balancers documentation:
You must create a firewall rule that allows traffic from 130.211.0.0/22 and 35.191.0.0/16 to reach your instances. These are IP address ranges that the load balancer uses to connect to backend instances.
Your users do not connect to the backends directly, but through these "proxies", so session affinity does happen, just not the way you want. In fact, if you are using GCLB, you should avoid relying on session affinity.
The affinity is working, just not the way you would expect. The affinity currently happens between the GCP load balancer and its backend (the node, not the pod). Once traffic reaches your node, the Service then forwards the request to a pod. Since the Service itself has no affinity, it chooses a pod essentially at random. There are two ways to make this work.
Use container native load balancing using network endpoint groups. This will result in the pods acting as backends to the Load Balancer so the cookie affinity should stick.
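Assuming the Service from the question and a VPC-native cluster, enabling container-native load balancing is roughly a one-annotation change (a sketch, not tested against your cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-bsc-service
  annotations:
    cloud.google.com/neg: '{"ingress": true}'  # back the Ingress with network endpoint groups (pod IPs)
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bsc-backendconfig"}}'
spec:
  selector:
    purpose: bsc-config-demo
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
```

With NEGs, the load balancer's backends are the pods themselves, so the GENERATED_COOKIE affinity from the BackendConfig applies per pod rather than per node.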
Leave the Ingress as is and configure your NodePort Service with spec.sessionAffinity
. On GKE, ClientIP
is the only supported value for this field. Then, to ensure the client IP is actually preserved, add the spec.externalTrafficPolicy: Local
field.
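Putting those two fields together, the Service from the question would look roughly like this (a sketch of the second approach):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-bsc-service
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bsc-backendconfig"}}'
spec:
  type: NodePort
  sessionAffinity: ClientIP     # kube-proxy pins a given client IP to one pod
  externalTrafficPolicy: Local  # preserve the client IP instead of SNATing at the node
  selector:
    purpose: bsc-config-demo
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
```

Note that with externalTrafficPolicy: Local, nodes without a matching pod drop the traffic, so health checks will steer the load balancer to nodes that actually run a replica.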
Alternatively, you can use the Nginx ingress controller, which does not introduce the two layers of load balancing the GCP ingress does, so the affinity is more direct; however, it is not covered by Google support.