How to deploy workload with K8s on-demand (GKE)?

10/22/2020

I need to deploy a GPU-intensive task on GCP. I want to use a Node.js Docker image, and within that container run a Node.js server that listens for HTTP requests and runs a Python image-processing script on demand (every time a new HTTP request containing the images to be processed is received). My understanding is that I need to deploy a load balancer with a static public IP address in front of the K8s cluster, which then builds/launches containers every time a new HTTP request comes in, and then destroys the container once processing is completed. Is container re-use not a concern? I have never worked with K8s before and I want to understand how it works; after reading the GKE documentation, this is how I imagine the architecture. What am I missing here?

-- Asdasdprog
google-cloud-platform
google-kubernetes-engine
kubernetes

1 Answer

10/22/2020

runs a Python image processing script on-demand (every time that a new HTTP request is received containing the images to be processed)

This can be solved on Kubernetes, but it is not a very common kind of workload.

The project that supports your use case best is Knative, with its per-request autoscaler. Google Cloud Run is the easiest way to use it, but if you want to run this within your own GKE cluster, you can enable it there as an add-on.
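As a rough illustration, a Knative Service for this workload could look like the manifest below. This is a sketch, not a definitive setup: the service name and image are placeholders, and running GPU workloads assumes your GKE cluster has a GPU node pool with the NVIDIA drivers installed.

```yaml
# Hypothetical Knative Service manifest (name and image are placeholders).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: image-processor
spec:
  template:
    spec:
      containerConcurrency: 1          # at most one request per container instance
      containers:
        - image: gcr.io/my-project/image-processor:latest
          resources:
            limits:
              nvidia.com/gpu: "1"      # requires a GPU node pool in the cluster
```

With `containerConcurrency: 1` and Knative's scale-to-zero behavior, you get something close to what the question describes: instances are started when requests arrive and torn down when traffic stops, without your code having to create or destroy containers itself.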

That said, you can also design your Node.js service to integrate with the Kubernetes API server and create a Job for each request. However, having a regular workload talk to the API server is not a good design; it is better to use Knative or Google Cloud Run.
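For completeness, if you did go the Jobs route, the request shape is roughly the following. This is a sketch under stated assumptions: the job name and image are hypothetical, and the commented-out submission uses the `@kubernetes/client-node` library (`BatchV1Api.createNamespacedJob`), which requires RBAC permission for the pod's service account to create Jobs.

```javascript
// Sketch: build a batch/v1 Job manifest for one image-processing request.
// Names and image are placeholders, not from the original answer.
function makeImageProcessingJob(name, image) {
  return {
    apiVersion: "batch/v1",
    kind: "Job",
    metadata: { name },
    spec: {
      ttlSecondsAfterFinished: 60, // auto-delete the Job after it finishes
      template: {
        spec: {
          restartPolicy: "Never", // a failed run should not restart in place
          containers: [
            {
              name: "processor",
              image,
              resources: { limits: { "nvidia.com/gpu": 1 } },
            },
          ],
        },
      },
    },
  };
}

// Hypothetical submission from inside the cluster via @kubernetes/client-node:
// const k8s = require("@kubernetes/client-node");
// const kc = new k8s.KubeConfig();
// kc.loadFromCluster();
// const batch = kc.makeApiClient(k8s.BatchV1Api);
// await batch.createNamespacedJob("default",
//   makeImageProcessingJob("process-job-1", "gcr.io/my-project/processor"));
```

This also shows why the answer steers you away from it: your HTTP server now needs cluster credentials, Job lifecycle handling, and cleanup logic that Knative and Cloud Run give you for free.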

-- Jonas
Source: StackOverflow