Basically, I'm trying to figure out if there is a better way, or what the best practice is, for my use case. I'll try my best to explain this.
Locally I use a virtualenv to separate the app dependencies from the OS on my computer. Secrets (e.g. SECRET_KEY, db credentials, API keys) are set in the virtualenv environment and read into the app with os.environ.get("<secret>").
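Roughly, the relevant part of the settings looks something like this (simplified; the variable names and database engine are just illustrative):

# settings.py (simplified; variable names and engine are illustrative)
import os

# Values come from environment variables: exported in the virtualenv's
# bin/activate locally, and from the kubectl-created Secret in the cluster.
SECRET_KEY = os.environ.get("SECRET_KEY")

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME"),
        "USER": os.environ.get("DB_USER"),
        "PASSWORD": os.environ.get("DB_PASSWORD"),
        "HOST": os.environ.get("DB_HOST"),
        "PORT": os.environ.get("DB_PORT"),
    }
}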
The Docker image is based on python:3.7-slim, and there I do not use virtualenv because I don't need to isolate the app dependencies from the image OS.

So all that being said, the issue I run into is the following, and it is why I still need virtualenv. Maybe it isn't an issue, but it just strikes me as "irregular".
Sometimes I need to execute commands for Django:
python manage.py collectstatic
python manage.py migrate
python manage.py makemigrations
etc...
Some of these are fine to run with:
kubectl exec -it deploy/server-deployment -- python manage.py <command>
However, something like this is not OK to run:
kubectl exec -it deploy/server-deployment -- python manage.py makemigrations
Why? Because the migration files will be generated inside the Pod, and they will be lost when the Pod is destroyed. Thus, I need to run makemigrations locally so that I can push the migrations to Git, and doing it locally will trigger the images and containers to be rebuilt with the migrations in them.
That is why I still need to use virtualenv locally: so I can run Django commands locally.
If I do not use virtualenv locally, the os.environ.get("<secret>") lookups will come back empty in the app.

So it seems like I have to use virtualenv locally, and the thing that feels "wrong" to me is that I have to set environment variables in two places:

- in the virtualenv, where I just put them in ./bin/activate
- in the cluster, via kubectl create secret
I can automate this with a script, but it still feels like double entry.
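For example, a rough sketch of what that script could look like (the ".env" filename and "server-secrets" Secret name here are just placeholders):

# sync_secrets.py (rough sketch; ".env" and "server-secrets" are placeholder names)
import subprocess

def load_env(path=".env"):
    # Read simple KEY=VALUE lines, skipping blanks and comments.
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

def main():
    values = load_env()
    # Render the Secret with --dry-run and pipe it through `kubectl apply`,
    # so re-running the script updates the existing Secret in place.
    create = ["kubectl", "create", "secret", "generic", "server-secrets",
              "--dry-run=client", "-o", "yaml"]
    create += [f"--from-literal={key}={value}" for key, value in values.items()]
    manifest = subprocess.run(create, check=True, capture_output=True, text=True).stdout
    subprocess.run(["kubectl", "apply", "-f", "-"], input=manifest, check=True, text=True)

if __name__ == "__main__":
    main()

Locally I could export the same values from the virtualenv's bin/activate (or load the same .env file with something like python-dotenv), so at least the values live in one file, but it still feels like two mechanisms glued together.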
What is the correct way or best practice for doing this?
I hope that makes sense.