How to scale a Kubernetes deployment on GKE without using kubectl

7/11/2019

This may seem like a bit of an oddity: I need to scale a Kubernetes deployment that I have running on GKE, but the machine initiating the call does not have access to kubectl.

We have a VM running Airflow (a tool we use for building automated data ETL pipelines). The team responsible for it doesn't want to give the VM access to GKE directly and doesn't want kubectl installed on it, so I'm trying to think of a way around this limitation.

My current thinking is to use Pub/Sub and have Airflow publish a notification that it wants the deployment to scale, but I'm not sure what I need on the subscriber end to actually handle it. I've been looking into the Operator SDK, which looks promising, but it has me wondering: do I need to go to the effort of building a custom operator and setting everything up, or does something already exist that I could use?

-- Andy
google-cloud-platform
google-kubernetes-engine
kubernetes

3 Answers

7/12/2019

Maybe deploying a bastion host would be a good solution in your case. You can read about the details of this approach in this article.

A bastion host provides an entry point to a K8s cluster (in this context) and gives other resource-management capabilities. Typically this is a Google Compute Engine VM created in the same VPC and subnet. ... Since this VM is in the same VPC and the subnet IP range is whitelisted in the master access list of the K8s cluster, this VM can be used to manage the cluster. The VM should therefore have the Google Cloud SDK installed, along with the required tools such as kubectl.

-- mario
Source: StackOverflow

8/12/2019

My current thinking is to use Pub/Sub and have Airflow publish a notification that it wants the deployment to scale, but I'm not sure what I need on the subscriber end to actually handle it.

I imagine you could trigger a Cloud Function on a Pub/Sub message and then, in the Cloud Function, use something like the Kubernetes Python client to scale your deployment.
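A minimal sketch of what that Cloud Function could look like. The message format (a JSON payload with `namespace`, `deployment`, and `replicas` fields) is purely illustrative — you'd define your own contract between Airflow and the function — and in a real function you'd build the client configuration from the GKE cluster endpoint and a service-account token rather than a local kubeconfig:

```python
import base64
import json


def parse_scale_request(event):
    """Decode a Pub/Sub-triggered event into (namespace, deployment, replicas).

    Pub/Sub events deliver the message body base64-encoded under "data".
    """
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    return payload["namespace"], payload["deployment"], int(payload["replicas"])


def scale_deployment(event, context):
    """Cloud Function entry point: scale a deployment via the Kubernetes API."""
    namespace, deployment, replicas = parse_scale_request(event)
    # Imported here so the pure parsing logic above stays dependency-free.
    from kubernetes import client, config
    # Shown for brevity; in a Cloud Function you would instead construct a
    # client.Configuration from the cluster endpoint and an OAuth token.
    config.load_kube_config()
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )
```

The `patch_namespaced_deployment_scale` call hits the deployment's scale subresource, which is exactly what `kubectl scale` does under the hood.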

A bit of a different approach would be to have your Airflow VM publish a custom metric to Stackdriver and then use that custom metric to configure autoscaling.
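If you went the custom-metric route, the Airflow side only needs to write a time series to the Monitoring API. A rough sketch of building that payload, using only the standard library — the metric name and "queue depth" semantics are made up for illustration, and the POST itself would need an OAuth token:

```python
import time


def build_time_series(project_id, desired_workers):
    """Build a Monitoring API v3 custom-metric payload (metric name illustrative)."""
    end_time = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    return {
        "timeSeries": [{
            "metric": {"type": "custom.googleapis.com/airflow/desired_workers"},
            "resource": {"type": "global", "labels": {"project_id": project_id}},
            "points": [{
                "interval": {"endTime": end_time},
                # int64 values are serialized as strings in the REST API.
                "value": {"int64Value": str(desired_workers)},
            }],
        }]
    }


# This body would be POSTed to:
#   https://monitoring.googleapis.com/v3/projects/{project_id}/timeSeries
body = build_time_series("my-project", 5)
```

An HPA configured against that custom metric would then do the actual scaling, so no component ever needs kubectl.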

-- Aleksi
Source: StackOverflow

7/11/2019

Provided you have access (authentication credentials) to the Kubernetes cluster, you can call the Kubernetes API server directly to scale the deployment or create a Horizontal Pod Autoscaler. You can also use the Python SDK to do the same.

-- Malathi
Source: StackOverflow