Kubernetes load balancer without downtime

12/21/2018

There is something that I'm missing about the load balancer service.
If I run a LoadBalancer Service and my load gets spread over, say, 3 pods, is there any guarantee that, in a multi-node cluster, pods of the same "type" will be spread evenly over the nodes?

If I understand it right, Kubernetes will try to spread different kinds of pods over the nodes in order to make the most efficient use of resources.
But does this guarantee that pods exposing the same application will be spread evenly too?

The replication controller will make sure that a certain number of pods is always running, but what happens in case of a node failure? Let's say one node's network interface goes down and 3 pods of the same type were scheduled on that node. The rc will take care that they come up again on a different node, but how do you know there won't be a temporary outage of that API in the meantime? I imagine that using a load balancer can prevent this?

-- Trace
kubernetes

2 Answers

12/21/2018

Kubernetes tries to distribute the load across all of its nodes, but it can still happen that all the pods of one application get deployed to the same node. As you say, in the event that that node fails, it would leave all your pods inaccessible.

As a developer, if you need to have pods distributed among different nodes, you have tools for it.

You can achieve your objective by using node affinity and anti-affinity, or inter-pod affinity and anti-affinity. The documentation lists all the solutions that Kubernetes offers: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
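As a minimal sketch, inter-pod anti-affinity on the hostname topology key forces the scheduler to place each replica on a different node (the Deployment name, labels, and image below are placeholders, not from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # "required" = hard rule: never co-locate two replicas on one node.
          # If there are fewer than 3 schedulable nodes, the extra replicas
          # stay Pending. Use preferredDuringScheduling... for a soft rule.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values: ["my-app"]
              topologyKey: kubernetes.io/hostname
      containers:
        - name: my-app
          image: my-app:1.0   # hypothetical image
```

With this in place, losing one node takes down at most one replica, and the Service keeps routing traffic to the healthy pods on the other nodes.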

-- pcampana
Source: StackOverflow

12/21/2018

Yes and no.

The kube-scheduler will try to spread your pods on a best-effort basis, but there are situations that will cause several replicas to be scheduled on the same node. So "no".

But you can use several Kubernetes features to achieve that, like pod anti-affinity, DaemonSets, host ports, the descheduler, etc. So "yes", if you know exactly what you need and how to achieve it.
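Of the features above, a DaemonSet is the bluntest instrument: it runs exactly one copy of the pod on every matching node, so the spread is guaranteed by construction. A minimal sketch (name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-app          # hypothetical name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # hypothetical image
```

Note that with a DaemonSet you give up control over the replica count: it scales with the number of nodes, not with a `replicas` field, so it only fits workloads that genuinely belong on every node.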

-- Radek 'Goblin' Pieczonka
Source: StackOverflow