I am learning the basics of multiprocessing and want to split for loops into separate processes so they run on different CPU cores. I plan to add multiprocessing to an existing script that does heavy computation.
Right now my approach looks like this:
    import multiprocessing as mp
    from functools import partial

    def function(a, b, numbers):
        return a*b

    if __name__ == '__main__':
        a = 1
        b = 2
        numbers = range(1000)
        func_part = partial(function, a, b)

        # Pool function
        result = mp.Pool().map(func_part, numbers)
        print(result)

        # equivalent for loop
        L = []
        for i in numbers:
            L.append(function(a, b, i))
        print(L)
Is there a better approach to doing this?
Is it possible to have the iterator `numbers` as the first parameter of `function` without breaking it? It seems `numbers` has to be passed last by the `map` function.