Is it possible to schedule two (or more) Argo workflows on a Kubernetes cluster concurrently and share the cluster resources between them 50/50? I'm looking for a resource-awareness capability, if it exists in Argo or another workflow engine.
Cheers.
There is no built-in way to guarantee a dynamic, even split between the two workflows. Nor am I aware of such a thing existing in any workflow engine, as it sounds very hard to execute well.
What you can and should do as a best practice is specify the workflow's compute requirements: you can set memory and CPU requests and limits on each template. There is no reason to give a pod an "endless" amount of resources if it has a specific task to perform.
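For example, a minimal sketch of a workflow template with explicit requests and limits (the template name, image, and resource values here are illustrative, not a recommendation):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: resource-limited-
spec:
  entrypoint: process-data
  templates:
    - name: process-data
      container:
        image: alpine:3.19
        command: [sh, -c, "echo processing"]
        resources:
          # Kubernetes schedules the pod based on requests
          requests:
            memory: "256Mi"
            cpu: "500m"
          # and throttles/evicts it based on limits
          limits:
            memory: "512Mi"
            cpu: "1"
```

With requests set on both workflows' pods, the Kubernetes scheduler itself handles the fair placement you're after, rather than Argo.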
If you have a pod that could potentially require all available resources, I recommend splitting it into several smaller workloads. Then you can control other parameters that affect resource usage, like concurrency, through Argo.
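As a sketch of that approach, Argo's spec-level `parallelism` field caps how many pods of a workflow run at once (the fan-out over `withItems` and the names here are just for illustration):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: parallel-capped-
spec:
  entrypoint: main
  # At most two pods from this workflow run concurrently,
  # leaving headroom for the other workflow's pods.
  parallelism: 2
  templates:
    - name: main
      steps:
        - - name: work
            template: worker
            arguments:
              parameters:
                - name: item
                  value: "{{item}}"
            withItems: [a, b, c, d]
    - name: worker
      inputs:
        parameters:
          - name: item
      container:
        image: alpine:3.19
        command: [sh, -c, "echo {{inputs.parameters.item}}"]
```

Capping each workflow's parallelism this way won't give you an exact 50/50 split, but combined with requests/limits it keeps either workflow from starving the other.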