Why does GKE allocate 2 pod replicas on 1 node (in a pool with 2 nodes)?

9/21/2019

I'm new to GKE and I'm testing some scaling features. I started with a simple example: 1 pod inside 1 pool with 1 node.

When I scaled the pool to 2 nodes and the pod to replicas=2, to my surprise both pods were allocated on the same node.

Is this a problem for redundancy? Can I ensure that my replicas are spread across all nodes?

-- Marcelo Dias
gcloud
google-kubernetes-engine
kubernetes

1 Answer

9/21/2019

The node a Pod lands on is decided by the Kubernetes scheduler. As described in the documentation, the scheduler first finds eligible nodes in a filtering phase, then picks the most suitable of those nodes using scoring criteria. Among other factors, image locality and fitting Pods onto as few nodes as possible could explain why both Pods were allocated on the same node.

Is this a problem for redundancy? Can I ensure that my replicas are spread across all nodes?

This can be an issue for redundancy. If that one node goes down, your entire service becomes unavailable (though if you use resources like Deployments, the Pods will eventually be rescheduled on the other node).

To favor spreading Pods across nodes, you can customize the scheduler or use mechanisms such as pod affinity and anti-affinity.
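As a sketch, a Deployment using pod anti-affinity to keep replicas on different nodes could look like this (the name, labels, and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # "required..." makes spreading a hard constraint: a replica will
          # stay Pending rather than share a node with another replica.
          # Use preferredDuringSchedulingIgnoredDuringExecution for a
          # soft preference instead.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname
      containers:
        - name: my-app
          image: nginx    # placeholder image
```

With `topologyKey: kubernetes.io/hostname`, the scheduler treats each node as a separate topology domain, so no two Pods matching the label selector will be placed on the same node.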

-- Alassane Ndiaye
Source: StackOverflow