Let's say I have some task.py that takes 5 minutes to run.
I need to run task.py with 1,000 different inputs. Each run of task.py is completely independent and uses little memory. I don't care whether they finish at the same time, just that they all finish.
I know I could use multiprocessing or multithreading, but is there any good reason not to do the following:
import subprocess
import sys
import numpy as np
for arg in np.arange(0, 1.01, 0.1):
    print(arg)
    # Popen returns immediately, so every run launches without waiting
    proc = subprocess.Popen(
        [sys.executable, "C:/task.py", "--arg", str(arg)])
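For reference, this is roughly the multiprocessing-style alternative I have in mind: a minimal sketch that still shells out to task.py but caps how many copies run at once (max_workers=8 is just a placeholder for however many cores are available):

import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def run_task(arg):
    # subprocess.run blocks until this one run of task.py finishes
    return subprocess.run(
        [sys.executable, "C:/task.py", "--arg", str(arg)],
        check=True)

# At most 8 copies of task.py run at a time; each thread just waits
# on its subprocess, so the real work happens in separate processes
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(run_task, np.arange(0, 1.01, 0.1)))

Is the plain Popen loop above meaningfully worse than something like this?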