So I've been messing around with Python's multiprocessing library for the last few days, and I really like the processing pool. It's easy to implement and I can see a lot of uses for it. I've worked through a couple of projects I'd heard about before to familiarize myself with it, and recently finished a program that brute-forces games of hangman.
Anywho, I was doing an execution-time comparison of summing all the prime numbers between 1 million and 2 million, both single-threaded and through a processing pool. For the hangman cruncher, putting the games in a processing pool improved execution time by about 8x (i7 with 8 cores), but when grinding out these primes, the pool actually increased processing time by almost a factor of 4.
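For reference, the timing was nothing fancy; a wall-clock wrapper around main() like the sketch below is all it takes (time.time() here is just illustrative, not my exact harness):

import time

if __name__ == "__main__":
    start = time.time()             # wall-clock start
    main()                          # run either version of main() from the code below
    print(time.time() - start)      # elapsed seconds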
Can anyone tell me why this is? Here is the code for anyone interested in looking at or testing it:
#!/usr/bin/env python
import math
from multiprocessing import Pool
primes = []    # results collected by the callback in the parent process
def log(result):
    # callback: runs in the parent process; result is False or (True, n)
    global primes
    if result:
        primes.append(result[1])
def isPrime(n):
    # returns (True, n) for primes so the callback can recover the number;
    # plain False for composites
    if n < 2:
        return False
    if n == 2:
        return True, n
    limit = int(math.ceil(math.sqrt(n)))    # trial division up to sqrt(n)
    i = 2
    while i <= limit:
        if n % i == 0:
            return False
        i += 1
    return True, n
def main():
    global primes
    #pool = Pool()
    for i in range(1000000, 2000000):
        #pool.apply_async(isPrime, (i,), callback=log)
        temp = isPrime(i)
        log(temp)
    #pool.close()
    #pool.join()
    print(sum(primes))
    return
if __name__ == "__main__":
    main()
As written, it'll run in a single thread; to run it through the processing pool, uncomment the pool statements and comment out the two uncommented lines in main()'s for loop.
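For anyone who wants to experiment further, here's a variant sketch that hands each worker a whole slab of the range via pool.map instead of one number per task. primes_in_range and the chunk size are my own made-up pieces, not part of the timed code above; it reuses isPrime as-is:

def primes_in_range(bounds):
    # test a whole slab of numbers in one task
    lo, hi = bounds
    return [n for n in range(lo, hi) if isPrime(n)]

def chunked_main():
    pool = Pool()
    step = 100000    # arbitrary: 10 chunks across the 1M-2M range
    chunks = [(lo, lo + step) for lo in range(1000000, 2000000, step)]
    results = pool.map(primes_in_range, chunks)    # one task per chunk
    pool.close()
    pool.join()
    print(sum(n for chunk in results for n in chunk))

Call chunked_main() instead of main() under the __main__ guard; the only real difference is fewer, bigger tasks.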