Exposed Service and Replica Set Relation in Kubernetes

5/15/2019

I have a question about how Kubernetes decides which pod serves a request when there are several replicas of that pod.

For instance, let's assume I have a web application running on a k8s cluster as multiple pod replicas, and they are exposed by a Service.

When a client sends a request, it goes through the Service and kube-proxy. But where and when does Kubernetes make the decision about which pod should serve the request?

I want to know the Kubernetes internals behind this. Can we control it? Can we decide which pod should serve based on client requests and custom conditions?

-- Pert8S
kubectl
kubernetes
kubernetes-ingress
kubernetes-pod
minikube

2 Answers

5/15/2019

can we decide which pod should serve based on client requests and custom conditions?

Since kube-proxy works at L4 (transport-layer load balancing), you can control session stickiness based on the client IP; it does not read the headers of client requests.

You can control the session with the service.spec.sessionAffinityConfig field in the Service object.

The following command provides the explanation: kubectl explain service.spec.sessionAffinityConfig

The following paragraph from the Kubernetes documentation provides a detailed answer:

Client-IP based session affinity can be selected by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can set the maximum session sticky time with the field service.spec.sessionAffinityConfig.clientIP.timeoutSeconds (the default is 10800 seconds, i.e. three hours).

The Service object would look like this:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000
```
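Conceptually, ClientIP affinity behaves like the following sketch (a simplified Python model for illustration only, not kube-proxy's actual implementation; the pod names and timeout are made up):

```python
import random
import time

class ClientIPAffinityProxy:
    """Simplified model of a Service with sessionAffinity: ClientIP."""

    def __init__(self, backends, timeout_seconds=10800):
        self.backends = backends        # pod endpoints behind the Service
        self.timeout = timeout_seconds  # sessionAffinityConfig.clientIP.timeoutSeconds
        self.sessions = {}              # client IP -> (chosen backend, last-seen time)

    def route(self, client_ip, now=None):
        now = time.time() if now is None else now
        entry = self.sessions.get(client_ip)
        if entry is not None and now - entry[1] < self.timeout:
            backend = entry[0]          # sticky: reuse the pod chosen earlier
        else:
            backend = random.choice(self.backends)  # new or expired session: pick a pod
        self.sessions[client_ip] = (backend, now)   # refresh the sticky timer
        return backend

proxy = ClientIPAffinityProxy(["pod-a", "pod-b", "pod-c"], timeout_seconds=10000)
first = proxy.route("10.0.0.7", now=0)
# Within the timeout, the same client IP always lands on the same pod.
assert proxy.route("10.0.0.7", now=5000) == first
```

The key point the sketch illustrates: the decision is keyed only on the client IP and a timer, which is why kube-proxy cannot route on headers or other request content.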
-- Suresh Vishnoi
Source: StackOverflow

5/15/2019

A Kubernetes Service creates a virtual load balancer (and an Endpoints object for it) and distributes requests among pods: round robin in kube-proxy's userspace mode, and random selection in the default iptables mode.

You can alter this behaviour. As Suresh said, you can use sessionAffinity to ensure that requests from a particular client IP always go to the same pod.
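Round-robin distribution over a Service's endpoints can be sketched like this (a minimal illustrative model; the pod names are made up and this is not kube-proxy's real code):

```python
from itertools import cycle

# Endpoints behind the Service: one entry per ready pod replica.
endpoints = ["pod-a", "pod-b", "pod-c"]

# Round robin: each new connection goes to the next pod in order, wrapping around.
next_backend = cycle(endpoints).__next__

chosen = [next_backend() for _ in range(6)]
# → ['pod-a', 'pod-b', 'pod-c', 'pod-a', 'pod-b', 'pod-c']
```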

-- Ankit Deshpande
Source: StackOverflow