In my headless Service, I configure sessionAffinity so that connections from a particular client are passed to the same Pod each time, as described in the Kubernetes documentation.
Here is the manifest:
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  clusterIP: None
  selector:
    app: nginx
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 30
I run some nginx Pods to test:
$ kubectl create deployment nginx --image=stenote/nginx-hostname
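The curl output below shows three distinct Pod names, so the deployment was scaled out first; something like this (the replica count of 3 is inferred from the output):
$ kubectl scale deployment nginx --replicas=3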
The problem is that when I curl my Service, requests land on different Pods and sessionAffinity seems to be ignored.
$ kubectl run --generator=run-pod/v1 --rm utils -it --image arunvelsriram/utils bash
root@utils:/# for i in $(seq 1 10) ; do curl service1; done
nginx-78d58889-b7fm2
nginx-78d58889-b7fm2
nginx-78d58889-b7fm2
nginx-78d58889-b7fm2
nginx-78d58889-b7fm2
nginx-78d58889-8rpxd
nginx-78d58889-b7fm2
nginx-78d58889-62jlw
nginx-78d58889-8rpxd
nginx-78d58889-62jlw
NB. When I check with
$ kubectl describe svc service1
Name: service1
Namespace: abdelghani
Labels: <none>
Annotations: <none>
Selector: app=nginx
Type: ClusterIP
IP Families: <none>
IP: None
IPs: <none>
Session Affinity: ClientIP
Events: <none>
the SessionAffinity configuration is present.
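describe does not print the timeoutSeconds, so to double-check the full affinity configuration I can also query the spec directly (the jsonpath query is just one way to do this):
$ kubectl get svc service1 -o jsonpath='{.spec.sessionAffinity} {.spec.sessionAffinityConfig.clientIP.timeoutSeconds}'
ClientIP 30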
Note that my Service is headless, i.e. clusterIP: None. SessionAffinity seems to work fine with non-headless Services, but I can't find a clear explanation in the documentation. Is this related to the platform not doing any proxying?
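For reference, here is a sketch of the comparison test against a non-headless Service (the name service2 and the patch command are just one way to set this up):
$ kubectl expose deployment nginx --name=service2 --port=80
$ kubectl patch svc service2 -p '{"spec":{"sessionAffinity":"ClientIP"}}'
root@utils:/# for i in $(seq 1 10) ; do curl service2; done
With service2, all ten requests return the same Pod name.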
Abdelghani
When you use a headless Service (clusterIP: None), there is no proxy involved at all.
From k8s docs:
For headless Services, a cluster IP is not allocated, kube-proxy does not handle these Services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the Service has selectors defined
So when you use a headless Service, DNS responds with a randomized list of the IPs of all Pods backing the Service:
/app # dig service1 +search +short
172.17.0.8
172.17.0.10
172.17.0.9
/app # dig service1 +search +short
172.17.0.9
172.17.0.10
172.17.0.8
/app # dig service1 +search +short
172.17.0.8
172.17.0.10
172.17.0.9
/app # dig service1 +search +short
172.17.0.10
172.17.0.9
172.17.0.8
/app # dig service1 +search +short
172.17.0.9
172.17.0.8
172.17.0.10
and curl just takes one of them and goes with it. Since name resolution happens on every request, each time you can get a different IP from DNS and therefore connect to a different Pod.
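If you need stickiness with a headless Service, the client itself has to pin to one resolved address, e.g. by resolving once and reusing that IP (a minimal sketch; picking the first entry with head -n1 is arbitrary):
/app # POD_IP=$(dig service1 +search +short | head -n1)
/app # for i in $(seq 1 10) ; do curl "$POD_IP"; done
All requests then hit the same Pod, because DNS is consulted only once. Alternatively, remove clusterIP: None from the manifest: with a regular ClusterIP Service, kube-proxy handles the traffic and sessionAffinity: ClientIP works as documented.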