Each function (func1, etc.) makes a request to a different URL:
from concurrent.futures import ThreadPoolExecutor, as_completed
import pandas as pd

def thread_map(ID):
    func_switch = {
        0: func1,
        1: func2,
        2: func3,
        3: func4
    }
    with ThreadPoolExecutor(max_workers=len(func_switch)) as threads:
        # Submit all four requests at once, one thread per function.
        futures = [threads.submit(func_switch[i], ID) for i in func_switch]
        # as_completed yields futures in *completion* order, so the
        # submission order (func1 first) is lost here.
        results = [f.result() for f in as_completed(futures)]
        for df in results:
            if not df.empty and df['x'][0] != '':
                return df
        return pd.DataFrame()
This is much faster (1.75 sec versus 4 sec for a plain for loop), but the results arrive unordered.
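For context, the for loop I compared against is essentially this sequential version (simplified; func1 to func4 are the same request functions as above):

import pandas as pd

def thread_map_sequential(ID):
    # Each request only starts after the previous one has finished.
    for func in (func1, func2, func3, func4):
        df = func(ID)
        if not df.empty and df['x'][0] != '':
            return df
    return pd.DataFrame()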
- How can each function be executed in parallel while still allowing the results to be checked in the order of submission?
Preferably as background threads/processes returning the corresponding dataframes, starting with func1: if the criteria for func1's result are not met, check func2's result, and so on, with all results already being fetched in the background. Each dataframe is different, but they all contain the same common column x. The sketch below illustrates the behavior I am after.
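Something along these lines, perhaps (an untested sketch: submit everything up front so it all runs in the background, but then walk the futures in submission order rather than with as_completed, since result() blocks only until that particular future finishes):

from concurrent.futures import ThreadPoolExecutor
import pandas as pd

def thread_map_ordered(ID):
    funcs = [func1, func2, func3, func4]  # priority order for checking
    with ThreadPoolExecutor(max_workers=len(funcs)) as threads:
        # All requests start immediately in background threads.
        futures = [threads.submit(func, ID) for func in funcs]
        # Iterate in submission order: result() blocks only until that
        # particular future is done; the later ones keep running.
        for future in futures:
            df = future.result()
            if not df.empty and df['x'][0] != '':
                # Note: leaving the with-block still waits for the
                # remaining threads to finish before returning.
                return df
    return pd.DataFrame()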
Any suggestions are highly appreciated, and I hope ThreadPoolExecutor is appropriate for this scenario. Thanks!