If I have the following Kubernetes objects:

- a Deployment with rollingUpdate.maxUnavailable set to 1,
- a PodDisruptionBudget with maxUnavailable set to 1,
- a HorizontalPodAutoscaler set up to allow autoscaling.

If the cluster is under load and in the middle of scaling up, what happens:

- Do Pods added due to the scale-up use the new version of the Pod?
- Does the PodDisruptionBudget stop the restart completely?
- Does the HorizontalPodAutoscaler scale up the number of nodes before taking down another node?

Pod affinity is set to avoid placing two Pods from the same Deployment
on the same node.

Pods which are deleted or unavailable due to a rolling upgrade of an application do count against the disruption budget, but controllers (like Deployment and StatefulSet) are not limited by PDBs when doing rolling upgrades: the handling of disruption during application updates is configured in the controller spec.
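For concreteness, the setup described in the question might be sketched as below; all names, images, and replica counts are illustrative, and the strategy.rollingUpdate block in the Deployment spec is the controller-level setting mentioned above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during a rolling update
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:v2  # illustrative image
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  maxUnavailable: 1       # voluntary disruptions: at most one pod at a time
  selector:
    matchLabels:
      app: my-app
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```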
So it partially depends on the controller configuration and implementation. I believe new Pods added by the autoscaler will use the new version of the Pod, because that's the Pod template present in the Deployment's definition at that point.
That depends on how you execute the node restart. If you just cut the power, nothing can be done ;) If you properly drain the node before shutting it down, then the PodDisruptionBudget will be taken into account and the draining procedure won't violate it. The disruption budget is respected by the Eviction API, but it can be violated by low-level operations such as manual pod deletion. It is more of a contract that some APIs respect than a hard limit enforced by Kubernetes as a whole.
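To make the distinction concrete: kubectl drain works by posting an Eviction object to each pod's eviction subresource, and the API server refuses the eviction (with a 429 response) if it would violate a PDB, while a plain pod delete skips that check entirely. A minimal sketch of such an eviction request body, with made-up pod name and namespace:

```yaml
# Posted to /api/v1/namespaces/default/pods/my-app-abc123/eviction
# (this is what `kubectl drain` does under the hood).
# If the eviction would violate a PodDisruptionBudget, the API
# server returns 429 Too Many Requests and the pod keeps running.
apiVersion: policy/v1
kind: Eviction
metadata:
  name: my-app-abc123   # illustrative pod name
  namespace: default
```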
According to the official documentation, if the anti-affinity is a "soft" one, the pods may still be scheduled on the same node when no better placement exists. If it's "hard", then the Deployment can get stuck, unable to schedule the required number of pods. A rolling update will still be possible, but the HPA won't be able to grow the pod pool any further.
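The "soft" vs. "hard" variants correspond to the two podAntiAffinity fields in the pod template. A sketch, with an illustrative app label:

```yaml
# Goes under spec.template.spec of the Deployment.
affinity:
  podAntiAffinity:
    # "Soft": the scheduler prefers to spread pods across nodes,
    # but will co-locate them if it has no other option.
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: my-app
        topologyKey: kubernetes.io/hostname
    # "Hard": the pod stays Pending if no node satisfies the rule.
    # requiredDuringSchedulingIgnoredDuringExecution:
    # - labelSelector:
    #     matchLabels:
    #       app: my-app
    #   topologyKey: kubernetes.io/hostname
```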