How to dynamically scale a service in OpenShift? A challenging scenario

3/5/2019

I'm currently trying to deploy a backend API service for my application in OpenShift, and it needs to scale so that each request runs in a new pod.

The service takes 5 minutes to serve a single request, and I have to hit it 700 times.

Is there a way I can create 700 pods to serve the 700 requests and scale back down to 1 after all the requests are completed?

Start of the application: 1 pod <- 700 requests

Serving: 700 pods serve one request each

End of the application: 1 pod

-- Raja Ayyavu
autoscaling
kubernetes
openshift

1 Answer

3/5/2019

Autoscaling in Kubernetes relies on metrics. From what I know, OpenShift supports CPU and memory utilization.
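
For reference, a minimal HorizontalPodAutoscaler sketch scaling on CPU utilization would look like this (the Deployment name `backend-api` and the thresholds here are placeholders, not something from your setup):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: backend-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend-api              # placeholder name of your backend Deployment
  minReplicas: 1
  maxReplicas: 700
  targetCPUUtilizationPercentage: 50   # add pods when average CPU goes above 50%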

But I don't think this is what you are looking for.

I think you should be looking into Jobs - Run to Completion.

Each request will spawn a new Job, which will run until it completes.

Example:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

This will run a job which computes π to 2000 places and prints it out.
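
In your case, instead of creating 700 separate Jobs, you could also use a single Job with `completions` and `parallelism` set to 700, so Kubernetes starts up to 700 pods at once and the whole thing finishes once all of them have succeeded. A rough sketch, assuming your request handling is packaged in a container image (`backend-worker` and its command are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: backend-batch
spec:
  completions: 700     # the Job is done after 700 successful pod runs
  parallelism: 700     # run up to 700 pods at the same time
  backoffLimit: 4
  template:
    spec:
      containers:
      - name: worker
        image: backend-worker            # placeholder image that handles one ~5 minute request
        command: ["./handle-request"]    # placeholder entrypoint
      restartPolicy: Never

You would create it with `oc create -f job.yaml`, watch progress with `oc get pods`, and clean up with `oc delete job backend-batch` once it finishes, which brings you back to your single long-running pod.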

-- Crou
Source: StackOverflow