Why does the k8s cluster-autoscaler (CA) taint a node ToBeDeleted before it triggers the removal of the node?

4/21/2020

I've seen the CA mark nodes ToBeDeleted well before it triggers a scale-down. This makes the nodes unschedulable. Then, when new Pods need to come up, there are fewer available locations because of the taint, which therefore forces the Pods to be unschedulable. Yet this does not trigger a scale-up. Why does it add the ToBeDeleted taint before triggering the delete? And why won't it remove the taint once more Pods need the space, if it hasn't already triggered the delete via the cloud provider?
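To illustrate what I mean by "marked", here is a minimal client-go sketch that lists nodes carrying the taint. The taint key ToBeDeletedByClusterAutoscaler and the timestamp value are assumptions on my part based on what I understand the CA applies, and the code assumes a recent client-go where List takes a context; adjust for your versions.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Taint key the CA appears to put on nodes it has picked for scale-down
// (assumption; adjust if your CA version uses a different key).
const toBeDeletedTaint = "ToBeDeletedByClusterAutoscaler"

func main() {
	// Load the default kubeconfig (~/.kube/config); adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		for _, taint := range node.Spec.Taints {
			if taint.Key == toBeDeletedTaint {
				// The value seems to be the unix timestamp of when the taint was added.
				fmt.Printf("%s: %s=%s (effect %s)\n", node.Name, taint.Key, taint.Value, taint.Effect)
			}
		}
	}
}
```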

For what it's worth: CA version 1.16.x, K8s 1.16.3.

-- lucidquiet
autoscaling
kubernetes

0 Answers