In one of my projects, we have 9 pods for a single microservice, and during a load test run we noticed that the load distribution across the 9 pods is not even. Also, on the pods that received low traffic compared to the others, there are gaps between requests. Has anyone faced this issue, and can you advise on the areas that could cause it?
All 9 pods are hosted on different nodes within the same cluster, and we have 3 zones.
The load balancer algorithm used is round-robin.
Sample flow: microservice 1 (running in 3 pods; it uses Nginx, but not as a load balancer) -> microservice 2 (running in 9 pods; it uses Node.js)
Another flow: microservice 1 (running in 6 pods) -> microservice 2 (running in 9 pods)
Refer to the screenshots below.
As far as Kubernetes is concerned, the load balancer distributes requests at the node level, not the pod level, and it disregards how many pods each node runs. Unfortunately, this is a limitation of Kubernetes. You may also have a look at the last paragraph of this documentation about traffic not being equally load balanced across pods. [1]
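To make the node-level behaviour concrete, here is a minimal sketch of a Service of type LoadBalancer; the name, selector, and port values are assumptions for illustration, not your actual manifest. The externalTrafficPolicy field is what the caveat in the linked documentation is about:

```yaml
# Hypothetical Service for microservice 2 (names and ports are assumptions).
apiVersion: v1
kind: Service
metadata:
  name: microservice-2
spec:
  type: LoadBalancer
  selector:
    app: microservice-2
  ports:
    - port: 80
      targetPort: 3000          # assumed Node.js container port
  # With the default "Cluster" policy, the external LB spreads traffic across
  # nodes and kube-proxy may forward a request to a pod on another node
  # (extra hop, client source IP is hidden). With "Local", traffic stays on
  # the node that received it and the client IP is preserved, but the LB
  # balances per node rather than per pod, which is the imbalance the linked
  # caveat describes.
  externalTrafficPolicy: Cluster
```

Since your 9 pods each sit on a different node, it is worth checking which policy the Service actually uses, because with Local the per-pod share simply follows whatever weighting the external LB applies to the nodes.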
Defining resources for containers [2] is important, as it allows the scheduler to make better decisions when placing pods onto nodes. May I recommend having a look at the following documentation [3] on how pods with resource limits are run. It mentions that a container is not allowed to exceed its CPU limit for an extended period of time; it is throttled rather than killed, which eventually shows up as decreased performance.
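For reference, a minimal sketch of how requests and limits might be declared on the Deployment; the replica count matches your setup, but the names, image, and resource values are placeholders to tune for your workload:

```yaml
# Hypothetical Deployment fragment showing requests/limits (values are assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-2
spec:
  replicas: 9
  selector:
    matchLabels:
      app: microservice-2
  template:
    metadata:
      labels:
        app: microservice-2
    spec:
      containers:
        - name: app
          image: registry.example.com/microservice-2:latest   # placeholder image
          resources:
            requests:              # what the scheduler uses when placing the pod
              cpu: "250m"
              memory: "256Mi"
            limits:                # CPU usage above this is throttled, not killed
              cpu: "500m"
              memory: "512Mi"
```

If some pods are being CPU-throttled at their limit, they respond more slowly and can appear to receive traffic in bursts with gaps, which may match what you are seeing on the low-traffic pods.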
[1] https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer
[2] https://kubernetes.io/docs/concepts/configuration/manage-resources-containers
[3] https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run
Regards, Anbu.