Scale down or drain a node running single-replica applications

11/18/2020

I am running single replicas of some applications with the default maxSurge and maxUnavailable of 25%, using kind: Deployment with a RollingUpdate strategy and replicas: 1.
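For reference, a minimal sketch of the kind of Deployment described above (the app name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # default
      maxUnavailable: 25%  # default
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0  # placeholder image
```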

Now, when GKE scales down a node, it first deletes the pod and only then creates a new pod on another node, which causes some downtime for the application.

Is there any solution? What are the best practices?

I can run multiple replicas of some services, but not of all of them.

What is the best way to handle this situation?

Using node affinity? Can anyone please share an example if possible?

Thanks in advance.

-- chagan
google-kubernetes-engine
kubernetes
microservices

1 Answer

11/18/2020

I don't know exactly how GKE behaves, but it is probably the same as in AWS.

You can configure the autoscaler to prevent scale-in actions on certain nodes; pair that with nodeAffinity, and you can keep a deployment running on a particular node that is configured not to be removed during scaling events.
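On GKE, the cluster autoscaler will not evict a pod annotated with `cluster-autoscaler.kubernetes.io/safe-to-evict: "false"`, and nodeAffinity can pin the pod to a labeled node pool. A rough sketch of combining the two (the app name and pool name are hypothetical; the `cloud.google.com/gke-nodepool` label is set by GKE itself):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app   # hypothetical
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # Tells the cluster autoscaler not to evict this pod
        # when considering the node for scale-down.
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cloud.google.com/gke-nodepool  # label applied by GKE
                operator: In
                values:
                - stable-pool                       # hypothetical pool name
      containers:
      - name: my-app
        image: my-app:1.0  # placeholder image
```

Pinning the pod to a node pool with autoscaling disabled (or with a fixed minimum size) gives the same effect as the "protected node" setup described above.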

UPDATE

Well, in fact, reading the docs, everything you need is documented: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#scheduling-and-disruption
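Among other things, that page notes that the autoscaler respects PodDisruptionBudgets. A PDB like the following (the name and selector are hypothetical and must match your Deployment's pod labels) prevents voluntary evictions of a single-replica app, though be aware it also blocks node drains for that pod:

```yaml
apiVersion: policy/v1   # use policy/v1beta1 on clusters older than 1.21
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb      # hypothetical
spec:
  minAvailable: 1       # with replicas=1, no voluntary eviction is allowed
  selector:
    matchLabels:
      app: my-app       # must match the Deployment's pod labels
```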

-- The Illusive Man
Source: StackOverflow