HPA implementation on a single-node Kubernetes cluster

9/17/2019

I am running a Kubernetes cluster on GKE. I run a monolithic application and am now migrating to microservices, so both are currently running in parallel on the cluster.

The monolithic application is a simple Python app taking around 200 MB of memory.

The K8s cluster is a simple single-node GKE cluster with 15 GB of memory and 4 vCPUs.

Now I am planning to apply HPA (Horizontal Pod Autoscaling) to my microservices and the monolithic application.

On the single node I have also installed the Graylog stack (Elasticsearch, MongoDB, and Graylog pods), separated into a devops namespace.

In another namespace, monitoring, there are Grafana, Prometheus, and Alertmanager running.

There is also an ingress controller and cert-manager running.

In the default namespace there is another Elasticsearch instance for application use, plus Redis and RabbitMQ. These are all single-pod StatefulSets or Deployments with volumes.

To restate: I want to apply HPA to both the microservices and the monolithic application.
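For reference, this is roughly the kind of HPA I have in mind; a minimal sketch assuming the monolith runs as a Deployment named monolith with CPU requests set (the names and thresholds are placeholders for my actual workloads):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: monolith-hpa          # placeholder name
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: monolith            # assumed Deployment name for the Python app
  minReplicas: 1
  maxReplicas: 4
  targetCPUUtilizationPercentage: 70   # scale out above 70% average CPU
```

As I understand it, the target pods need CPU resource requests defined for the utilization calculation to work.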

Can someone suggest how to add a node pool on GKE and autoscale it? When I added a node to the pool and deleted the old node from the GCP console, the whole cluster restarted and services went down for a while.
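From the GKE docs, I believe creating an autoscaling pool would look something like this (untested on my side; the pool name, cluster name, zone, machine type, and node counts are placeholders):

```sh
# Create a new node pool with the cluster autoscaler enabled
gcloud container node-pools create apps-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-4 \
  --enable-autoscaling \
  --num-nodes 1 \
  --min-nodes 1 \
  --max-nodes 3
```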

I am also thinking of using affinity/anti-affinity, so can someone suggest how to divide the infrastructure and implement HPA?
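For the affinity part, what I am picturing is pinning workloads to a dedicated pool via its node-pool label; a sketch assuming a pool named apps-pool (GKE labels every node with cloud.google.com/gke-nodepool automatically; the workload details are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith              # placeholder workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monolith
  template:
    metadata:
      labels:
        app: monolith
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cloud.google.com/gke-nodepool
                operator: In
                values:
                - apps-pool   # assumed pool name
      containers:
      - name: app
        image: python:3.7     # placeholder image
        command: ["python", "-m", "http.server", "8000"]
        resources:
          requests:
            cpu: 100m         # needed for the HPA above
            memory: 200Mi
```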

-- Harsh Manvar
docker
elasticsearch
google-cloud-platform
google-kubernetes-engine
kubernetes

1 Answer

9/17/2019

From the wording in your question, I suspect that you want to move your current workloads to the new pool without disruption.

Since this action represents a voluntary disruption, you can start by defining a PodDisruptionBudget to control how many pods can be evicted during the operation:

A PDB limits the number of pods of a replicated application that are down simultaneously from voluntary disruptions.

The settings in the PDB depend on your application and your business needs; for a reference on the values to apply, you can check this.
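As a minimal sketch, assuming your application pods carry a label like app: my-app (the label, name, and threshold are placeholders you would adapt):

```yaml
apiVersion: policy/v1beta1    # the PDB API version current at the time
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb            # placeholder name
spec:
  minAvailable: 1             # keep at least one pod up during voluntary disruptions
  selector:
    matchLabels:
      app: my-app             # must match your workload's pod labels
```

Note that for a single-replica workload, a minAvailable of 1 will block the drain entirely, so you may need to scale those workloads up first.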

Following this, you can drain the nodes where your application is scheduled, since it will be "protected" by the budget. Drain uses the Eviction API instead of directly deleting the pods, which should make evictions graceful.
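In practice that would be something like the following (the node name is a placeholder; --delete-local-data is only needed if pods use emptyDir volumes):

```sh
# Mark the old node unschedulable, then evict its pods gracefully
kubectl cordon gke-old-node-name
kubectl drain gke-old-node-name --ignore-daemonsets --delete-local-data
```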

Regarding affinity, I'm not sure how it fits into the aforementioned goal that you're trying to achieve. However, there is an answer on this particular point in the comments.

-- yyyyahir
Source: StackOverflow