I'm calling a function with a large memory overhead on a list of N files. The overhead has several causes that can't be resolved without modifying the function itself, but I have worked around the memory leak using the multiprocessing module. By creating a subprocess for each of the N files and then calling pool.close(), the memory used by the function is released back to the OS with minimal overhead. Here is what I have so far:
import multiprocessing as mp

def my_function(n):
    # Memory-heavy call on file n
    do_something(file=n)
    return

if __name__ == '__main__':
    # Create a fresh single-worker pool for each file so the worker's
    # memory is released once the pool is closed and joined
    for n in range(N):
        pool = mp.Pool(processes=1)
        results = pool.map(my_function, [n])
        pool.close()
        pool.join()
This does exactly what I want: with processes=1, one file is processed at a time across the N files. After each file I call pool.close() and pool.join(), which shut down the worker process and release its memory back to the OS. Before this, I didn't use multiprocessing at all, just a plain for loop (sketched below), and the memory accumulated until my system crashed.
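For reference, this is roughly my earlier version (a minimal sketch; do_something and N stand in for my actual function and file count):

# Previous approach (sketch): everything runs in the main process,
# so memory from do_something accumulates across iterations
for n in range(N):
    do_something(file=n)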
My questions are:
- Is this the correct way to implement this?
- Is there a better way to implement this?
- Is there a way to run more than one process at a time (processes > 1) and still have the memory released after each n? (See the sketch after this list for the kind of thing I have in mind.)
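For the third question, I noticed that mp.Pool accepts a maxtasksperchild argument, and I imagine something like the sketch below (untested; do_something, N, the choice of 4 workers, and chunksize=1 are just my guesses), where each worker process is replaced after a single task so its memory should be returned to the OS, but I'm not sure whether this is the right approach:

import multiprocessing as mp

def my_function(n):
    do_something(file=n)

if __name__ == '__main__':
    # maxtasksperchild=1 should replace each worker process after it
    # finishes one task, so memory is released even with several
    # workers running in parallel; chunksize=1 keeps one file per task
    pool = mp.Pool(processes=4, maxtasksperchild=1)
    pool.map(my_function, range(N), chunksize=1)
    pool.close()
    pool.join()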
I'm just learning about the multiprocessing module. I've found many multiprocessing examples here, but none specific to this problem. I'd appreciate any help I can get.