I want to start two different Python scripts (the TensorFlow object detection train.py and eval.py) in parallel on different GPUs, and kill eval.py once train.py has completed.
I have the following code to start the two subprocesses in parallel (based on How to terminate a python subprocess launched with shell=True). The problem is that both subprocesses start on the same device (I can guess why; I just don't know how to start them on different devices).
start_train = "CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=0 python3 train.py ..."
start_eval = "CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=1 python3 eval.py ..."
commands = [start_train, start_eval]
procs = [subprocess.Popen(i, shell=True, stdout=subprocess.PIPE, preexec_fn=os.setsid) for i in commands]
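I also wondered whether I should set the environment per process instead of prefixing the command string, e.g. by passing env= to Popen. A minimal sketch of that idea (not sure it is the right way; the "..." arguments are the same placeholders as above):

import os
import subprocess

# copy the parent environment and override only the GPU selection per child
train_env = os.environ.copy()
train_env["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
train_env["CUDA_VISIBLE_DEVICES"] = "0"

eval_env = os.environ.copy()
eval_env["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
eval_env["CUDA_VISIBLE_DEVICES"] = "1"

procs = [subprocess.Popen("python3 train.py ...", shell=True, env=train_env,
                          stdout=subprocess.PIPE, preexec_fn=os.setsid),
         subprocess.Popen("python3 eval.py ...", shell=True, env=eval_env,
                          stdout=subprocess.PIPE, preexec_fn=os.setsid)]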
After this point I don't know how to proceed. Do I need something like the loop below? Should I use p.communicate() instead to avoid deadlocks? Or is it enough to call wait() or communicate() only on train.py, since its completion is all I need?
for p in procs:
    p.wait() # I assume this won't affect the parallel execution
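(The subprocess docs warn that wait() can deadlock when stdout=PIPE is used and the child fills the pipe buffer, so maybe something like this sketch is safer, assuming I only care about train.py:)

# communicate() waits AND drains stdout/stderr, so train.py cannot block on a full pipe
train_out, _ = procs[0].communicate()
print(procs[0].returncode)  # set by communicate()/wait() once the child has exited

# eval.py's output is never read, so it could be discarded at launch instead of piped:
# subprocess.Popen(start_eval, shell=True, stdout=subprocess.DEVNULL, preexec_fn=os.setsid)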
Then I need something like the following. I don't need a return value from train.py itself, only the return code of the subprocess. From the Popen.returncode documentation, wait() and communicate() seem to rely on the return code being set somewhere, and I don't understand how that happens. I would prefer something like
if train is done without any error:
    os.killpg(os.getpgid(procs[1].pid), signal.SIGTERM) 
else:
    write the error to the console, or to a file (but how?)
Or would the following be enough?
train_return = procs[0].wait() 
if train_return == 0:
    os.killpg(os.getpgid(procs[1].pid), signal.SIGTERM) 
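My understanding from the docs is that wait() both sets and returns Popen.returncode, and that errors can be written to a file simply by passing an open file handle as stderr= to Popen. Roughly like this sketch (the log file name is only an example, and the usual os/signal/subprocess imports are assumed):

err_log = open("train_error_log.txt", "w")  # example file name
train_proc = subprocess.Popen(start_train, shell=True,
                              stdout=subprocess.PIPE,
                              stderr=err_log,  # train.py's errors end up in the file
                              preexec_fn=os.setsid)
eval_proc = subprocess.Popen(start_eval, shell=True,
                             stdout=subprocess.DEVNULL,  # eval.py's output is not needed here
                             preexec_fn=os.setsid)

train_out, _ = train_proc.communicate()  # waits for train.py and drains its stdout pipe
err_log.close()
if train_proc.returncode == 0:
    os.killpg(os.getpgid(eval_proc.pid), signal.SIGTERM)  # stop eval.py's whole process group
else:
    print("train.py failed with exit code", train_proc.returncode, "- see train_error_log.txt")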
UPDATE AFTER SOLVING THE PROBLEM:
This is my main:
if __name__ == "__main__":
    exp = 1
    go = True
    while go:
        create_dir(os.path.join(MAIN_PATH,'kitti',str(exp),'train'))
        create_dir(os.path.join(MAIN_PATH,'kitti',str(exp),'eval'))
        copy_tree(os.path.join(MAIN_PATH,"kitti/eval_after_COCO"), os.path.join(MAIN_PATH,"kitti",str(exp),"eval"))
        copy_tree(os.path.join(MAIN_PATH,"kitti/train_after_COCO"), os.path.join(MAIN_PATH,"kitti",str(exp),"train"))
        err_log = open('./kitti/'+str(exp)+'/error_log' + str(exp) + '.txt', 'w')
        train_command = CUDA_COMMAND_PREFIX + "0 python3 " + str(MAIN_PATH) + "legacy/train.py \
                                            --logtostderr --train_dir " + str(MAIN_PATH) + "kitti/" \
                                            + str(exp) + "/train/ --pipeline_config_path " + str(MAIN_PATH) \
                                            + "kitti/faster_rcnn_resnet101_coco.config"
        eval_command = CUDA_COMMAND_PREFIX + "1 python3 " + str(MAIN_PATH) + "legacy/eval.py \
                                            --logtostderr --eval_dir " + str(MAIN_PATH) + "kitti/" \
                                            + str(exp) + "/eval/ --pipeline_config_path " + str(MAIN_PATH) \
                                            + "kitti/faster_rcnn_resnet101_coco.config --checkpoint_dir " + \
                                            str(MAIN_PATH) + "kitti/" + str(exp) + "/train/"
        os.system("python3 dataset_tools/random_sampler_with_replacement.py --random_set_id " + str(exp))
        time.sleep(20)
        update_train_set(exp)
        train_proc = subprocess.Popen(train_command,
                                  stdout=subprocess.PIPE,
                                  stderr=err_log, # write errors to a file
                                  shell=True)
        time.sleep(20)      
        eval_proc = subprocess.Popen(eval_command,
                                 stdout=subprocess.PIPE,
                                 preexec_fn=os.setsid, # own process group, so killpg below only hits eval.py
                                 shell=True)
        time.sleep(20)
        if train_proc.wait() == 0: # successful termination
            os.killpg(os.getpgid(eval_proc.pid), signal.SIGTERM)
        err_log.close() # close this experiment's error log before the next run
        clean_train_set(exp)
        time.sleep(20)
        exp += 1
        if exp == 51:
            go = False