How to use multiple processes in python for a continuous workload

5/16/2019

I have a Python application running inside a pod in Kubernetes which subscribes to a Google Pub/Sub topic and, for each message, downloads a file from a Google Cloud Storage bucket.

The issue I have is that I can't process the workload quickly enough with a single-threaded Python application. I would normally run a number of pods to handle the workload, but the problem is that all the files have to end up on the same filesystem to be processed by another application.

I have tried spawning a new thread for each request but the volume is too great.

What I would like to do is: 1) have a number of processes that can process new messages, and 2) keep the processes alive and use them to respond to new requests coming in.

All the examples for multiprocessing in Python are single-workload examples, for example passing 10 numbers to a square function, which isn't what I'm trying to achieve.

I've used gunicorn in the past, which spawns a number of worker processes for a Flask application; what I want is to do something similar without Flask.

-- Adam
gunicorn
kubernetes
multiprocessing
python

1 Answer

5/16/2019

First, try to separate the IO-bound tasks (e.g. network requests, reading/writing files) from the CPU-bound tasks (e.g. parsing JSON/XML, calculations). For the IO-bound work, use the threading module or a ThreadPoolExecutor, which keeps its worker threads alive and reuses them. Keep in mind that writing to disk is a blocking operation!
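
As a minimal sketch of that pattern, assuming a hypothetical download_file() helper standing in for the real bucket download and disk write, the Pub/Sub callback can simply hand each message to a long-lived thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

def download_file(message):
    # Placeholder for the blocking IO: fetch the object named in the
    # message and write it to the shared filesystem.
    print(f"downloading {message}")

# One pool for the lifetime of the subscriber; its worker threads are
# created once and reused for every message.
executor = ThreadPoolExecutor(max_workers=8)

def callback(message):
    # Called by the Pub/Sub subscriber for each message; hand the blocking
    # work to the pool so the callback itself returns immediately.
    executor.submit(download_file, message)
```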

If you want real parallelism for the CPU-bound work, use multiprocessing or a ProcessPoolExecutor. To synchronize the processes you can use a shared (proxy) object, a file, a pipe, Redis, etc.
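
This also covers the requirement of keeping workers alive between messages: a ProcessPoolExecutor starts its worker processes once and hands each submitted task to an idle worker. A rough sketch, with process_file() as a placeholder for the real CPU-bound step:

```python
from concurrent.futures import ProcessPoolExecutor

def process_file(path):
    # Placeholder for the CPU-bound work (parsing, transforming, etc.).
    return f"processed {path}"

# The worker processes stay alive and pick up each new task as it is
# submitted -- no per-message process spawning.
pool = ProcessPoolExecutor(max_workers=4)

def handle(path):
    future = pool.submit(process_file, path)
    # Runs when the worker finishes; the result is sent back to the
    # main process automatically.
    future.add_done_callback(lambda f: print(f.result()))
```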

Shared objects such as Manager proxies (Namespace, dict, etc.) are preferable if you want to stay in pure Python.
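
For example, a Manager dict is a proxy object that every worker process can read and write, which is a simple way to share state in pure Python (a standalone sketch, not tied to the Pub/Sub code above):

```python
from multiprocessing import Manager, Process

def worker(shared, key):
    # Each process writes into the same proxy dict.
    shared[key] = "done"

if __name__ == "__main__":
    with Manager() as manager:
        shared = manager.dict()
        procs = [Process(target=worker, args=(shared, i)) for i in range(3)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(dict(shared))  # {0: 'done', 1: 'done', 2: 'done'}
```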

To avoid blocking when working with files, use a dedicated thread or go asynchronous. With asyncio you can use the aiofile library.
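
A minimal sketch of a non-blocking file write under asyncio; this one uses the aiofiles package (a similar third-party library to the aiofile one mentioned above), so treat the exact API as an assumption:

```python
import asyncio

import aiofiles  # third-party: pip install aiofiles

async def save(path, data):
    # The disk write is kept off the event loop, so other coroutines
    # keep running while the file is written.
    async with aiofiles.open(path, "wb") as f:
        await f.write(data)

asyncio.run(save("/tmp/example.bin", b"payload"))
```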

-- RuS
Source: StackOverflow