I have three DaemonSet pods, each containing a Hadoop ResourceManager container. One of the three is the active node, and the other two are standby nodes. So there are two questions:
Consider the following: Deployments, DaemonSets and ReplicaSets are abstractions meant to manage a uniform group of objects.
In your specific case, although you're running the same application, you can't say it's a uniform group of objects, as you have two types: active and standby objects.
There is no way to tell Kubernetes which is which if they're grouped in what is supposed to be a uniform set of objects.
As suggested by @wolmi, having them in a Deployment instead of a DaemonSet still leaves you with the issue that deployment strategies can't individually identify objects to control when they're updated, because of the aforementioned logic.
My suggestion would be that, in addition to using a Deployment with node affinity to ensure a highly available environment, you separate active and standby objects into different Deployments/Services and base your rolling update strategy on that scenario.
This ensures that you update the standby nodes first, removing the risk of updating the active node before the others.
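A minimal sketch of the standby Deployment described above; all names, labels, and the container image are hypothetical assumptions, not taken from the question:

```yaml
# Standby ResourceManagers in their own Deployment, selected by a "role" label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hadoop-rm-standby   # hypothetical name
spec:
  replicas: 2               # the two standby nodes
  selector:
    matchLabels:
      app: hadoop-rm
      role: standby
  template:
    metadata:
      labels:
        app: hadoop-rm
        role: standby
    spec:
      containers:
      - name: resourcemanager
        image: my-hadoop-rm:latest   # hypothetical image
```

A mirror Deployment (e.g. `hadoop-rm-active` with `replicas: 1` and `role: active`) plus Services selecting on the `role` label would complete the split; you can then roll out to the standby Deployment first and only update the active one after verifying the standbys.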
I think this is not the best way to do that. I totally understand that you use a DaemonSet to be sure that Hadoop exists on every node in an HA environment, but you can have that same scenario using a Deployment with affinity parameters, more concretely pod anti-affinity; then you can be sure only one Hadoop pod exists per K8S node.
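As a sketch of the anti-affinity approach (names and image are hypothetical assumptions), the `topologyKey: kubernetes.io/hostname` rule below tells the scheduler never to place two pods with the same `app` label on one node:

```yaml
# Deployment that spreads one Hadoop pod per node via pod anti-affinity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hadoop-rm           # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hadoop-rm
  template:
    metadata:
      labels:
        app: hadoop-rm
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: hadoop-rm
            topologyKey: kubernetes.io/hostname   # one pod per node
      containers:
      - name: resourcemanager
        image: my-hadoop-rm:latest   # hypothetical image
```

With `required...` scheduling, a fourth replica would stay Pending on a three-node cluster, which mimics the one-per-node guarantee of a DaemonSet while keeping Deployment rollout control.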
With that new approach, you can use the Deployment's rolling update to control the rollout. Some resources from the documentation:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
https://kubernetes.io/docs/tasks/run-application/rolling-update-replication-controller/