Following this post and this one, here's my situation:
Users upload images to my backend, which is set up like so: LB -> Nginx Ingress Controller -> Django (uWSGI). Each image eventually gets uploaded to Object Storage. Django therefore writes the image temporarily to disk, then delegates the upload task to an async service (DjangoQ), since the upload to Object Storage can be time-consuming. Here's the catch: since my Django replicas and DjangoQ replicas are all separate pods, the file is not available in the DjangoQ pod. As usual, the task queue is managed by a Redis broker and any random DjangoQ pod may consume the task.
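For reference, here's roughly what the handoff looks like today (simplified; module and function names are illustrative):

```python
# views.py (simplified): Django writes the upload to local disk,
# then hands the slow Object Storage upload off to DjangoQ.
import uuid
from django.http import JsonResponse
from django_q.tasks import async_task

def upload_image(request):
    image = request.FILES["image"]
    tmp_path = f"/tmp/{uuid.uuid4()}_{image.name}"
    with open(tmp_path, "wb") as f:
        for chunk in image.chunks():
            f.write(chunk)
    # The task goes to Redis; any DjangoQ pod may pick it up,
    # but tmp_path only exists on *this* pod's disk -- the problem.
    async_task("myapp.tasks.upload_to_object_storage", tmp_path)
    return JsonResponse({"status": "queued"})
```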
I need a way to share the disk file created by Django with DjangoQ.
The above-mentioned posts offer two solutions:
- Solution 1: use NFS to mount a shared volume on all pods. This seems like overkill, since the shared volume only holds the file for a few seconds until the upload to Object Storage completes.
- Solution 2: the Django service makes the file available via an API, which DjangoQ uses to fetch the file from the other pod. This seems nice, but I have no idea how to proceed... should I create a second Django/uWSGI app as a sidecar container that listens on another port and returns an HttpResponse with the file? Can the file be streamed?
Third option: don't move the file data through your app at all. Have the user upload it directly to object storage. This usually means exposing an API endpoint that returns a pre-signed upload URL valid for a few minutes; the user uploads the file, then makes another call to tell you the upload is finished. Your async task can then download it and do whatever it needs.
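As a sketch of what that could look like with boto3 against an S3-compatible store (bucket name, key scheme, and expiry are placeholders):

```python
# Minimal sketch of the pre-signed upload flow, assuming boto3
# and an S3-compatible Object Storage backend.
import uuid
import boto3
from django.http import JsonResponse

s3 = boto3.client("s3")  # endpoint/credentials come from your config

def create_upload_url(request):
    key = f"uploads/{uuid.uuid4()}"
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "my-bucket", "Key": key},
        ExpiresIn=300,  # URL valid for 5 minutes
    )
    # The client PUTs the file to `url`, then calls a "done" endpoint
    # with `key` so your async task knows what to fetch.
    return JsonResponse({"upload_url": url, "key": key})
```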
Otherwise, you've identified the two options correctly. For option 2, an internal MinIO server is pretty common, since Django is slow at serving large file blobs.
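That said, if you go with option 2 without MinIO, the internal endpoint can be an ordinary Django view: FileResponse streams the file in chunks rather than loading it all into memory. A rough sketch, with path handling and inter-pod auth omitted:

```python
# Rough sketch of an internal file-serving view for option 2.
# FileResponse wraps the open file and streams it in chunks.
import os
from django.http import FileResponse, Http404

TMP_DIR = "/tmp/uploads"  # assumed staging directory

def serve_temp_file(request, filename):
    path = os.path.join(TMP_DIR, os.path.basename(filename))
    if not os.path.exists(path):
        raise Http404("file not found or already uploaded")
    return FileResponse(open(path, "rb"), as_attachment=True)
```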