I'm trying to add multithreading to a very time-consuming program, and I've come across this SO answer: https://stackoverflow.com/a/28463266/3451339, which essentially offers this solution for mapping a function over a single array:
    from multiprocessing.dummy import Pool as ThreadPool

    pool = ThreadPool(4)
    results = pool.map(my_function, my_array)

    # Close the pool and wait for the work to finish
    pool.close()
    pool.join()
and, for passing multiple arrays:

    results = pool.starmap(function, zip(list_a, list_b))
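To check my understanding of that pattern, here is a minimal, self-contained toy example (add and the two lists are made up for illustration) that behaves as I'd expect:

    from multiprocessing.dummy import Pool as ThreadPool

    def add(a, b):
        return a + b

    list_a = [1, 2, 3]
    list_b = [10, 20, 30]

    pool = ThreadPool(4)
    # zip pairs up the i-th items of the two lists; starmap unpacks
    # each pair into the positional arguments of add
    results = pool.starmap(add, zip(list_a, list_b))
    pool.close()
    pool.join()

    print(results)  # [11, 22, 33]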
The following is the code I have so far, which I need to refactor to use threading. It iterates over four lists, passes a different combination of arguments to the function on each iteration, and appends every result to a final container:
    import pandas as pd

    strategies = ['strategy_1', 'strategy_2']
    budgets = [90, 100, 110, 120, 130, 140, 150, 160]
    formations = ['343', '352', '433', '442', '451', '532', '541']
    models = ['model_1', 'model_2', 'model_3']

    all_teams = pd.DataFrame()
    for strategy in strategies:
        for budget in budgets:
            for formation in formations:
                for model in models:
                    # One API-bound call per parameter combination
                    team = function(strategy=strategy,
                                    budget=budget,
                                    curr_formation=formation,
                                    model=model)

                    all_teams = all_teams.append(team, ignore_index=True, sort=False)\
                                         .reset_index(drop=True)\
                                         .copy()
Note: each call to function makes API web requests, so the workload is I/O-bound rather than CPU-bound.

What is the best way to apply multithreading in this scenario?
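This is roughly what I have in mind: flatten the four nested loops into one list of argument tuples with itertools.product and hand that list to starmap. A minimal sketch, assuming function is thread-safe and returns a DataFrame (build_team is a wrapper I made up so that starmap's positional arguments line up with function's keyword arguments):

    import itertools
    from multiprocessing.dummy import Pool as ThreadPool

    import pandas as pd

    # Every (strategy, budget, formation, model) combination, in the
    # same order the nested loops produce them
    combos = list(itertools.product(strategies, budgets, formations, models))

    def build_team(strategy, budget, formation, model):
        # Wrapper so starmap's positional arguments map onto
        # function's keyword arguments
        return function(strategy=strategy,
                        budget=budget,
                        curr_formation=formation,
                        model=model)

    pool = ThreadPool(4)
    teams = pool.starmap(build_team, combos)
    # Close the pool and wait for the work to finish
    pool.close()
    pool.join()

    # Combine all results once at the end instead of appending
    # inside the loop
    all_teams = pd.concat(teams, ignore_index=True, sort=False)

The single pd.concat at the end replaces the repeated append inside the loop, which also avoids copying the accumulated DataFrame on every iteration.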