Kubernetes cluster won't scale back down to the default node count after scaling up

8/29/2018

I have a three-node cluster (nodes A, B, and C), and it scales up because I have a deployment with an HPA. When the HPA scales up, the cluster nodes scale up too. But when the HPA scales back down to 1 pod (the minimum), the cluster will sometimes stay at 4 nodes.
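
For reference, the setup is roughly equivalent to the sketch below. The deployment name my-app, the image, and the thresholds are illustrative placeholders, not my exact values:

```
# Hypothetical reproduction: a deployment plus an HPA targeting it.
kubectl create deployment my-app --image=nginx
kubectl autoscale deployment my-app --min=1 --max=10 --cpu-percent=80

# Under load the HPA raises the replica count and the cluster
# autoscaler adds node D; once load drops, replicas return to 1
# but node D is sometimes not removed.
```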

I think the reason is this: when the HPA scales down, it terminates pods until it reaches minReplicas. If the last remaining pod happens to land on the scaled-up node (node D), the cluster autoscaler can't evict that pod and reschedule it onto one of the original nodes (A, B, or C), so node D stays. This raises an issue: I'm paying for an extra node.
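
To verify which node the surviving pod is on and what the autoscaler thinks, I run checks along these lines. This is a sketch: it assumes a standard cluster-autoscaler deployment that publishes its status ConfigMap in kube-system, and <node-D-name> is a placeholder:

```
# Which node did the last remaining pod land on?
kubectl get pods -o wide

# The autoscaler's own view of scale-down candidates (default
# status ConfigMap location for the standard cluster-autoscaler).
kubectl describe configmap cluster-autoscaler-status -n kube-system

# Events on node D can show why it is not considered removable.
kubectl describe node <node-D-name>
```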

Is this expected behavior, or should I do something to work around it?

Thanks in advance.

-- user2963226
autoscaling
kubernetes

0 Answers