HTTP Load Balancer ClientIP affinity not working

8/16/2016

I can't seem to get session affinity in the GCP HTTP load balancer to work properly. My test setup is as follows:

  • I have a Container Engine cluster with 2 node pools (in different zones), each with 2 nodes.
  • I have a deployment with replicas: 8, spread (almost) evenly across the 4 nodes.
  • I have a service exposed as follows (IPs redacted; a manifest sketch follows this list):

    Name:           svc-foo
    Namespace:      default
    Labels:         app=foo
    Selector:       app=foo
    Type:           NodePort
    IP:             ....
    Port:           <unset> 8080/TCP
    NodePort:       <unset> 31015/TCP
    Endpoints:      ...:8080,...:8080,...:8080 + 5 more...
    Session Affinity:   ClientIP
    No events.
  • I have a load balancer with a backend service that has 2 backends pointing at port 31015. It has a health check that passes and a URL map rule that routes to that backend service.

  • Finally, I have session affinity set to Client IP on that backend service as well (see the gcloud sketch below).
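
For reference, here's a manifest sketch equivalent to the service described above (reconstructed from the describe output; the port numbers and labels match my setup, redacted values stay redacted):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: svc-foo
      labels:
        app: foo
    spec:
      type: NodePort
      selector:
        app: foo
      # ClientIP affinity at the kube-proxy level
      sessionAffinity: ClientIP
      ports:
      - port: 8080
        targetPort: 8080
        nodePort: 31015
    EOF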
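
And this is roughly how I'm checking and setting affinity on the backend service (the backend service name is a placeholder):

    # check the current affinity setting
    gcloud compute backend-services describe foo-backend \
        --format='value(sessionAffinity)'
    # prints: CLIENT_IP

    # set it if needed
    gcloud compute backend-services update foo-backend \
        --session-affinity CLIENT_IP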

After curling a route repeatedly (a sketch of the test loop is below) and checking the logs in Stackdriver, I see container.googleapis.com/pod_name in the log metadata with a bunch of different pod names. In the Kubernetes UI, I also see a small CPU spike on every pod, indicating that requests are alternating and hitting each one. The weird part is that in GCP, the monitoring graph for the backend service shows requests per second going to only one of the pools, even though the logs and CPU graphs from Kubernetes show the other pool being hit as well.
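
For completeness, this is the kind of loop I'm testing with, run from a single machine (so one client IP); the LB address and path are placeholders:

    # all requests come from one client IP, so with ClientIP affinity
    # working they should all land on the same pod
    for i in $(seq 1 50); do
        curl -s -o /dev/null http://LB_IP/some/route
    done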

-- artushin
google-compute-engine
google-kubernetes-engine
kubernetes

0 Answers