I ran my first piece of multiprocessing code; the test code is shown below. In the test I ran just two processes to check that it produced the expected results, which it did.

I now want to run it for real. My computer has 8 cores and I want to run approximately 100 processes. My question: if the code below creates 100 processes, do I need to cap the number that run at any one time, or does multiprocessing do something clever in the background, realise there are only 8 cores, and schedule accordingly?
import pickle
from multiprocessing import Process, shared_memory

if __name__ == '__main__':
    # set up the data (Somefunc and run_func are defined elsewhere)
    df_data = Somefunc()
    pickled_df = pickle.dumps(df_data)
    size = len(pickled_df)
    # create a shared memory block large enough for the pickled data
    shm = shared_memory.SharedMemory(create=True, size=size)
    shm.buf[:size] = pickled_df
    # Notice that we only pass the name of the block, not the block itself
    processes = [Process(target=run_func, args=(shm.name, x)) for x in range(1, 3)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    shm.close()
    # unlink() should only be called once, after all workers have finished
    shm.unlink()
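For completeness, run_func isn't shown above. A minimal worker-side sketch consistent with the snippet (the function name and its behaviour beyond attach/unpickle/close are my assumptions) would attach to the block by name, unpickle a private copy, and close without unlinking:

```python
import pickle
from multiprocessing import shared_memory

def run_func(shm_name, worker_id):
    # attach to the existing block by name (create defaults to False)
    shm = shared_memory.SharedMemory(name=shm_name)
    try:
        # pickle.loads accepts the buffer directly and ignores any
        # trailing padding bytes the OS may have added to the block
        data = pickle.loads(shm.buf)
        # stand-in for the real per-worker work
        print(f"worker {worker_id} got {len(data)} items")
    finally:
        # close the worker's handle, but do NOT unlink here:
        # only the parent should unlink, exactly once
        shm.close()
```

Each worker gets its own unpickled copy, so the shared block itself is effectively read-only input.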
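For comparison, here is a minimal sketch of the capped alternative I'm considering: multiprocessing.Pool queues all 100 tasks but only runs as many worker processes as you ask for (defaulting to os.cpu_count()). The work function below is a stand-in for my real worker:

```python
import os
from multiprocessing import Pool

def work(x):
    # stand-in for the real worker; just squares its input
    return x * x

if __name__ == '__main__':
    # the pool size caps concurrency; tasks beyond that simply wait
    with Pool(processes=os.cpu_count()) as pool:
        # 100 tasks are submitted, but at most cpu_count() run at once
        results = pool.map(work, range(100))
    print(results[:5])  # -> [0, 1, 4, 9, 16]
```

With bare Process objects, by contrast, all 100 start immediately and the OS scheduler time-slices them across the 8 cores; nothing limits them automatically.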
 
    