I have 4-5 Python scripts that act as Kafka consumers/producers and need to run indefinitely.
I could create a container for each script, but then I would have 4-5 containers, and Kubernetes would manage restarting any that fail.
But I want to run all of these in a single Pod, and if any script fails, only that script should be restarted, not the whole Pod, which would disrupt the in-flight transactions in the scripts that are running fine.
Is there a way to achieve this? Or what would be the best solution: run each script in its own container, or run them all in a single container?
If more scripts are added in the future, what can I do to keep this behavior?
REASON for moving the scripts into 1 Pod: 1 script ~ 1 container ~ 400 MB, so 5 scripts ~ 5 containers ~ 2 GB.
This is not a recommended pattern! One process per container is the suggested approach.
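As a sketch of that suggested pattern (container and image names here are hypothetical), you can keep one container per script inside a single Pod. Note that with `restartPolicy: Always`, the kubelet restarts only the container that exited, not the whole Pod, so the other scripts keep running:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kafka-workers            # hypothetical name
spec:
  restartPolicy: Always          # applied per container: only the failed one is restarted
  containers:
  - name: consumer-1
    image: myrepo/consumer-1:latest   # hypothetical image
    command: ["python", "consumer_1.py"]
  - name: consumer-2
    image: myrepo/consumer-2:latest
    command: ["python", "consumer_2.py"]
  # ...one entry per script; add more containers as new scripts arrive
```

In practice you would usually wrap this in a Deployment rather than a bare Pod so it survives node failures as well.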
Running all the scripts in the same container will use more or less the same amount of RAM (~2 GB); the per-container overhead is very small.
You can run all 5 scripts under an init/supervisor process such as s6 (https://github.com/just-containers/s6-overlay), and s6 will take care of restarting only the script that stopped.
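A minimal sketch of the s6-overlay approach (using the v2-style `/etc/services.d/` layout; script names, base image, and the pinned release version are assumptions, not from the question): each script gets its own service directory with a `run` file, and s6 supervises them, restarting only the service that dies:

```dockerfile
FROM python:3.11-slim

# Install s6-overlay (example version; pick the current release for your arch)
ADD https://github.com/just-containers/s6-overlay/releases/download/v2.2.0.3/s6-overlay-amd64.tar.gz /tmp/
RUN tar xzf /tmp/s6-overlay-amd64.tar.gz -C /

COPY consumer_1.py consumer_2.py /app/

# One service directory per script; s6 restarts a service whenever its process exits
RUN mkdir -p /etc/services.d/consumer-1 /etc/services.d/consumer-2 \
 && printf '#!/usr/bin/env sh\nexec python /app/consumer_1.py\n' > /etc/services.d/consumer-1/run \
 && printf '#!/usr/bin/env sh\nexec python /app/consumer_2.py\n' > /etc/services.d/consumer-2/run \
 && chmod +x /etc/services.d/consumer-1/run /etc/services.d/consumer-2/run

# s6's init becomes PID 1 and supervises all services
ENTRYPOINT ["/init"]
```

Adding a future script is then just another `COPY` plus another `/etc/services.d/<name>/run` file.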