I wrote a multiprocessing program in Python. It can be illustrated as follows:
import multiprocessing

nodes = multiprocessing.Manager().list()
lock = multiprocessing.Lock()

def get_elems(node):
    # fetch this node's child elements by sending requests
    # (real implementation omitted)
    return []

def worker():
    lock.acquire()
    if not nodes:           # avoid IndexError when the shared list is empty
        lock.release()
        return
    node = nodes.pop(0)
    lock.release()
    elems = get_elems(node)
    lock.acquire()
    for elem in elems:
        nodes.append(elem)  # append the fetched elements, not the old node
    lock.release()

if __name__ == "__main__":
    node = {"name": "name", "group": 0}
    nodes.append(node)
    processes = [None for i in xrange(10)]
    for i in xrange(10):
        processes[i] = multiprocessing.Process(target=worker)
        processes[i].start()
    for i in xrange(10):
        processes[i].join()
At the beginning of the run everything seems fine, but after a while the program slows down. The same slowdown happens with multithreading. I read that Python has a Global Interpreter Lock, so I switched to multiprocessing, but the slowdown is still there. The complete code is here. I have also tried Cython, and the slowdown persists. Is there something wrong with my code, or is there an inherent limitation in Python's parallelism?
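For reference, here is a sketch of the same worker pattern built on multiprocessing.JoinableQueue instead of the shared list plus an explicit Lock (queue_worker and SENTINEL are my placeholder names, and get_elems is stubbed out, so this is only a rough illustration of the pattern I mean, not my real code):

import multiprocessing

SENTINEL = None  # pushed once per worker to signal shutdown

def get_elems(node):
    # stand-in for the real request-sending code
    return []

def queue_worker(queue):
    while True:
        node = queue.get()
        if node is SENTINEL:
            queue.task_done()
            break
        for elem in get_elems(node):
            queue.put(elem)   # schedule newly found elements for processing
        queue.task_done()

if __name__ == "__main__":
    queue = multiprocessing.JoinableQueue()
    queue.put({"name": "name", "group": 0})
    workers = [multiprocessing.Process(target=queue_worker, args=(queue,))
               for i in xrange(10)]
    for p in workers:
        p.start()
    queue.join()             # blocks until task_done() was called for every item
    for p in workers:
        queue.put(SENTINEL)  # one sentinel per worker
    for p in workers:
        p.join()

The queue does its own locking internally, so the explicit Lock disappears; otherwise the behaviour should match the list-based version above.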