I have some stateless applications where I want one pod to be scheduled on each node (limited by a node selector). If I have 3 nodes in the cluster and one goes down then I should still have 2 pods (one on each node).
This is exactly what DaemonSets do, but DaemonSets come with a couple of caveats (for example, they don't play well with node draining, and tools such as Telepresence don't support them). So I would like to emulate the behaviour of DaemonSets using Deployments.
My first idea was to use a Horizontal Pod Autoscaler with custom metrics, so that the desired replica count would equal the number of nodes. But even after implementing this, it still wouldn't guarantee that exactly one pod gets scheduled per node (I think?).
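For reference, the HPA I had in mind looks roughly like the sketch below. The cluster_node_count metric is hypothetical; it would have to be exposed through an external/custom metrics adapter (e.g. prometheus-adapter), and the target names are placeholders.

```yaml
# Hypothetical HPA that keeps replicas equal to the node count.
# "cluster_node_count" is NOT a built-in metric; it assumes an
# external metrics adapter exposing the number of (matching) nodes.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: cluster_node_count
        target:
          type: AverageValue
          averageValue: "1"   # desired replicas = ceil(cluster_node_count / 1)
```

This only keeps the replica count in sync with the node count; it says nothing about which nodes the pods actually land on.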
Any ideas on how to implement this?
If I have 3 nodes in the cluster and one goes down then I should still have 2 pods (one on each node).
I understand this to mean that you want to design your cluster for availability. The most important thing, then, is that your replicas (pods) are spread across different nodes, to reduce the impact if a node goes down.
Use PodAntiAffinity with a topologyKey for this.
The Kubernetes documentation uses this pattern to "deploy the redis cluster so that no two instances are located on the same host". See the Kubernetes documentation section "Never co-located in the same node" and the ZooKeeper High Availability example.
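A minimal sketch of what this could look like as a Deployment (the app label, node selector and image are placeholders, not from your setup):

```yaml
# Hard anti-affinity: no two "my-app" pods may be scheduled onto the
# same node (topologyKey kubernetes.io/hostname).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                       # at most the number of matching nodes
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        role: my-app                # hypothetical node label, keeps your node-selector behaviour
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname
      containers:
        - name: my-app
          image: my-app:1.0
```

With the required anti-affinity rule and replicas set to the number of matching nodes, the scheduler will never place two of these pods on the same node; if a node goes down, its pod stays Pending rather than doubling up on a surviving node, which matches the "one pod per remaining node" behaviour you describe.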
You can consider the combination below.