Having as many Pods as Nodes

10/26/2020

We are currently using 2 Nodes, but we may need more in the future.

The StatefulSet is a mariadb-galera, and its replica count is currently 2.

When we add a new Node we want the replica count to become 3; if we no longer need a Node and delete it, we want the count to go back to 2.

In short, if we have 3 Nodes we want 3 replicas, one on each Node.

I could use Pod Topology Spread Constraints, but then we would end up with a bunch of unschedulable (Pending) Pods.
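For illustration, a sketch of the problem (labels are placeholders): forcing one Pod per Node with a required anti-affinity rule, a stricter variant of spreading, leaves any surplus replicas Pending.

```yaml
# Pod template fragment: require that no two Pods with this label
# land on the same Node; with replicas > Nodes, the extras stay Pending.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: mariadb-galera   # placeholder label
        topologyKey: kubernetes.io/hostname
```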

Is there a way to adapt the number of replicas automatically every time a Node is added or removed?

-- destroyed
kubernetes
mysql
replicaset

2 Answers

10/26/2020

When we add a new Node we want the replica count to become 3; if we no longer need a Node and delete it, we want the count to go back to 2.

I would recommend doing it the other way around: manage the replicas of your container workload, and let the number of nodes be adjusted to match.

See e.g. the Cluster Autoscaler for how this can be done; the details depend on which cloud provider or environment your cluster runs in.

It is also important to set your CPU and memory requests so that each Pod occupies (most of) a whole node.
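A sketch of such sizing (the numbers are assumptions and must be tuned to your actual node capacity, minus what system Pods reserve):

```yaml
# Container spec fragment: request close to a full node's capacity
# so the scheduler places at most one of these Pods per node.
resources:
  requests:
    cpu: "3500m"     # assumed 4-vCPU node, with headroom for system pods
    memory: "14Gi"   # assumed 16 GiB node
```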

For MariaDB and similar workloads, you should use a StatefulSet and not a DaemonSet.

-- Jonas
Source: StackOverflow

10/26/2020

You could use a DaemonSet (https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), which ensures there is one Pod per Node.

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
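A minimal DaemonSet sketch (the name and image are placeholders, not your workload; see the caveat below about databases):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemon        # placeholder name
spec:
  selector:
    matchLabels:
      app: example-daemon
  template:
    metadata:
      labels:
        app: example-daemon
    spec:
      containers:
        - name: main
          image: busybox:1.32           # placeholder image
          command: ["sleep", "infinity"]
```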

Also, it is not advised to run a database in anything other than a StatefulSet, because of the stable Pod identity that StatefulSets provide.

Given all the database administration involved, it is advisable to use a cloud provider's managed database; managing one yourself, especially inside the cluster, will run into multiple issues.

-- paltaa
Source: StackOverflow