My goal is to run something like

multicore_apply(serie, func)

so that serie.apply(func) is executed on several cores.
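For reference, this is the single-core version I want to speed up; func and serie here are only small placeholders for the real, much heavier ones:

import pandas as pd

def func(x):
    return x * 2              # stand-in for the real, heavy function

serie = pd.Series(range(100000))
result = serie.apply(func)    # the call I want to spread across several cores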
So I tried to write a pair of functions to do it.

The function that runs the apply inside a worker process:
from multiprocessing import Process, Queue
import pandas as pd

def adaptator(func, queue):
    serie = queue.get().apply(func)
    queue.put(serie)
The process management:
def parallel_apply(ncores, func, serie):
    # split the Series into ncores interleaved chunks, one queue per chunk
    series = [serie[i::ncores] for i in range(ncores)]
    queues = [Queue() for i in range(ncores)]
    for _serie, queue in zip(series, queues):
        queue.put(_serie)
    result = []
    jobs = []
    for i in range(ncores):
        jobs.append(Process(target=adaptator, args=(func, queues[i])))
    for job in jobs:
        job.start()
    # wait for each worker, then collect its result from its queue
    for queue, job in zip(queues, jobs):
        job.join()
        result.append(queue.get())
    return pd.concat(result, axis=0).sort_index()
I know the i::ncores slicing is not optimal, but that is not the issue here: as soon as the input is longer than about 30000 elements, the processes never stop...

Is that a misunderstanding of Queue()?
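For completeness, here is roughly how I call it; double is just a stand-in for the real method:

def double(x):
    return x * 2

if __name__ == '__main__':
    serie = pd.Series(range(50000))      # above ~30000 elements the call never returns
    out = parallel_apply(4, double, serie)
    print(out.head())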
I don't want to use multiprocessing.map: the func to apply is a method of a very complex and rather large class, so shared memory makes it just too slow. I would rather pass it through a queue once this process problem is solved, as sketched below.
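To make that last point concrete, here is a rough sketch (hypothetical names) of what I have in mind once the hang is fixed, with the chunk and the function travelling through the queue together instead of func being a Process argument:

def adaptator_v2(queue):
    # hypothetical variant: both the chunk and the function arrive through the queue
    chunk, f = queue.get()
    queue.put(chunk.apply(f))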
Thank you for your advice.