Kubernetes Pods only assigned to one node when setting resource limits

10/12/2018

I run a service with pods that pick up tasks, process them and then finish. At the moment it is a testing environment, so each pod has no real CPU/memory usage, but for the future I want to set limits for the pods.

Running all the pods (let's say 100) at once results in an equal distribution across my two nodes (each with 2 CPUs and 2 GB of memory), as expected.

For testing purposes, I now set the limits and requests of each pod:

    limits:
      memory: "1000Mi"
      cpu: "1"
    requests:
      memory: "1000Mi"
      cpu: "1"

Because the controllers/system components take up a bit of the available resources on each node, I would expect that one pod runs on each node until it completes and then the next one is scheduled. In reality, only one node is used to process all 100 pods, one after another.
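
As far as I understand, the scheduler compares the sum of the pods' requests against each node's allocatable resources (capacity minus what is reserved for the system), which can be inspected like this (node names are placeholders):

    # Capacity, Allocatable and the requests already placed on the node
    kubectl describe node <node-name>

    # Allocatable CPU and memory of all nodes at a glance
    kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.allocatable.cpu,MEM:.status.allocatable.memory'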

Does anybody know what might cause this behavior? There are no other limits set.

Thanks!

-- Fabian83
containers
docker
kubernetes
parallel-processing

1 Answer

10/17/2018

Finally, I found out that the problem was incorrect information reported by the "kubectl describe node ..." command. It indicated more memory (1225076 KB) than was actually available on the node (0.87 GB). I don't know why (especially because the setup of the two workers is identical, yet they still report different amounts of free memory), but this seemed to be the problem.
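
For anyone checking the same thing, a quick way to compare what the API reports against what a node itself has, assuming shell access to the worker (the node name is a placeholder):

    # Memory the Kubernetes API considers allocatable on the node
    kubectl get node <node-name> -o jsonpath='{.status.allocatable.memory}'

    # Memory the node itself actually reports (run on the worker)
    free -m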

-- Fabian83
Source: StackOverflow