I have a DigitalOcean Kubernetes cluster ("Cluster1") with auto-scaling configured for a minimum of 1 node and a maximum of 2 nodes. I am trying to understand DigitalOcean's behavior for automatically scaling nodes down in the following scenario.
- The "Cluster1" cluster has Nginx as its ingress controller, added via the "1-click setup" during cluster provisioning.
- The cluster has auto-scaling configured with 1 node minimum and 2 nodes maximum. Let's call them Node1 and Node2.
- The cluster sits behind a DigitalOcean load balancer ("LB1"), which forwards traffic to the ingress controller, i.e. the pod running Nginx.
- Let's say there is a Deployment with a single replica (replicas: 1) running "image1", which requests 80% of a node's CPU.
- Initially, image1 is deployed, and since Node1 has the capacity, the image1 pod starts running on Node1.
- Now suppose image1 is updated to image2 upstream. During the rolling update, the scheduler finds insufficient capacity on Node1, so the cluster autoscaler provisions Node2, and a new pod running image2 is created there. Once the image2 pod is up and running, the pod running image1 starts terminating.
- LB1 updates its routing to include both Node1 and Node2.
- After the image1 pod on Node1 terminates (because the Deployment specifies replicas: 1), Node1 is no longer running any user workloads.
- Ideally, Node1 should then be de-provisioned automatically.
- I tried to manually remove Node1 from the cluster using the DO dashboard.
- LB1 updated to show a single node, but its status showed as down.
- Upon investigating I found that the "nginx-controller" pod was running only on Node1. When Node1 is terminated, it takes a while for a new "nginx-controller" pod to be scheduled on the remaining Node2, and there is downtime all the while.
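For reference, the workload in the scenario above can be sketched as a minimal Deployment manifest. The names, image reference, and CPU figure are assumptions for illustration, not taken from my actual cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: image1-deployment        # assumed name
spec:
  replicas: 1                    # single replica, as in the scenario
  selector:
    matchLabels:
      app: image1
  template:
    metadata:
      labels:
        app: image1
    spec:
      containers:
      - name: app
        image: registry.example.com/image1:latest   # assumed image reference
        resources:
          requests:
            cpu: "1600m"         # ~80% of a 2-vCPU node; adjust to the node size
```

Because the pod requests most of a node's CPU, a rolling update cannot fit the old and new pods on the same node, which is what triggers the scale-up to Node2.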
My question is how to best use auto-scaling for scaling down. I have a few solutions in mind:
- Is it possible to run the "nginx-controller" on all nodes, so that terminating one node does not take the ingress down?
or
- If I drain the node with kubectl (i.e. `kubectl drain <node-name>`) and then manually delete the node from the dashboard, should there be no downtime? Or will just running `kubectl drain` make DigitalOcean scale the node pool down automatically?
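On the first option: one common way to run an ingress controller on every node is to deploy it as a DaemonSet instead of a Deployment. A minimal sketch, assuming the upstream ingress-nginx controller image and a dedicated namespace (both are assumptions, not what the 1-click setup installs verbatim):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx        # assumed namespace
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4  # assumed version tag
        ports:
        - containerPort: 80
        - containerPort: 443
```

An alternative that keeps the Deployment is to raise its replica count to 2 or more and add pod anti-affinity so the replicas land on different nodes; that survives a single-node loss without running a copy on every node.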
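On the second option: the drain-then-delete sequence I have in mind would look roughly like this (`<node-name>` is a placeholder; whether DigitalOcean then reclaims the drained node on its own is exactly what I am asking):

```shell
# Cordon the node so no new pods are scheduled onto it
# (drain also cordons, but doing it first makes the intent explicit)
kubectl cordon <node-name>

# Evict pods gracefully; DaemonSet-managed pods cannot be evicted,
# so they must be ignored, and emptyDir data is discarded
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# Verify all workloads, including the ingress controller,
# are running on the remaining node before deleting anything
kubectl get pods --all-namespaces -o wide

# Finally, delete the node from the DigitalOcean dashboard
```

If the eviction completes before the node is deleted, the replacement pods should already be serving on the other node, which in theory avoids the downtime I observed.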