I run a service with pods that pick up tasks, process them and then finish. At the moment it is a testing environment, so there is no real CPU/memory usage in each pod, but for the future I want to set resource limits for the pods.
Running all the pods (let's say 100) at once results in an equal distribution across my two nodes (each with 2 CPUs and 2 GB of memory), as expected.
For testing purposes I now set the limits and requests of each pod:
limits:
  memory: "1000Mi"
  cpu: "1"
requests:
  memory: "1000Mi"
  cpu: "1"
Because the controllers/system components take up some of the nodes' available resources, I would expect one pod to run to completion on each node and then the next one to be scheduled. In reality, only one node is used to process all 100 pods, one after another.
Does anybody know what might cause this behavior? There are no other limits set.
Thanks!
Finally I found out that the problem was incorrect information reported by the "kubectl describe node ..." command. It indicated more memory (1225076KB) than was actually available on the node (0.87GB). I don't know why (especially because the setup of the two workers is identical, yet they still have different amounts of free memory), but this seemed to be the cause.
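For anyone running into something similar: comparing the node's Capacity and Allocatable values with what the OS on the node reports is a quick way to see what the scheduler is actually working with (the node name below is a placeholder):

# Capacity, Allocatable and the currently scheduled requests on the node
kubectl describe node <node-name>

# Only the allocatable values the scheduler uses for placement decisions
kubectl get node <node-name> -o jsonpath='{.status.allocatable}'

# What the OS itself reports when run on the node
free -m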