There are a couple of things you need to do:
- Make sure you create the name of the file once, and share it across all processes. daveydave400 covered this in his answer.
- Make sure write operations across all processes are synchronized. If you don't do this, your worker processes will try to write to the log concurrently, which can result in one write stomping on another. Avoiding this was not covered in the other answer.
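To make the stomping concrete before wiring it into logging, here's a minimal sketch of the second point with plain file appends (filenames and counts are just placeholders): several processes append to one file, and a shared lock keeps each write whole. Note that a plain multiprocessing.Lock can't be pickled into map() arguments, so the sketch uses a Manager lock proxy instead:

```python
import multiprocessing, os, re, tempfile

def append_lines(args):
    # Each worker appends 50 lines; holding the shared lock around the
    # open/write keeps one process's line from interleaving with another's.
    path, lock, worker_id = args
    for i in range(50):
        with lock:
            with open(path, 'a') as f:
                f.write('worker=%d line=%d\n' % (worker_id, i))

if __name__ == '__main__':
    path = os.path.join(tempfile.mkdtemp(), 'demo.log')
    # A plain multiprocessing.Lock cannot be passed through map() arguments;
    # a manager proxy can, because it pickles as a reference.
    manager = multiprocessing.Manager()
    lock = manager.Lock()
    pool = multiprocessing.Pool(processes=4)
    pool.map(append_lines, [(path, lock, w) for w in range(4)])
    pool.close()
    pool.join()
    with open(path) as f:
        lines = f.read().splitlines()
    print(len(lines))  # 200, every line intact
```

The same demo without the `with lock:` will usually still produce 200 lines for writes this small, but nothing guarantees it; the lock is what makes it correct rather than lucky.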
The logging module internally uses a threading.RLock to synchronize writes, but this is not process-safe: each process gets its own RLock that knows nothing about the others. So you need to create your own logging.Handler that uses a process-safe lock, and share that lock with every process in the pool. We can do this by taking advantage of the initializer/initargs keyword arguments to multiprocessing.Pool: we pass an initializer function all the parameters needed to create identical loggers, and it builds a global logging.Logger object in each worker process.
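The initializer/initargs mechanism is worth seeing in isolation: each worker process runs the initializer exactly once at startup, so a global set there is visible to every task that worker later executes (the names here are made up for the demo):

```python
import multiprocessing

offset = None  # per-process global, set once by the initializer

def init_worker(value):
    global offset
    offset = value

def add_offset(x):
    # Runs in a worker process, where init_worker has already set offset.
    return offset + x

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=2,
                                initializer=init_worker, initargs=(10,))
    print(pool.map(add_offset, [1, 2, 3]))  # [11, 12, 13]
    pool.close()
    pool.join()
```

The full example below uses exactly this pattern, except the "value" being installed is a configured logger.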
Here's a working example:
import datetime, logging, multiprocessing, os.path, time
log = None
class ProcessFileHandler(logging.FileHandler):
    def __init__(self, *args, **kwargs):
        # Grab the shared lock before calling the base __init__,
        # which invokes createLock() below.
        try:
            self._lock = kwargs.pop('lock')
        except KeyError:
            raise ValueError("No 'lock' keyword argument provided")
        super(ProcessFileHandler, self).__init__(*args, **kwargs)
    def createLock(self):
        # The base class assigns a threading.RLock to self.lock here;
        # install the shared process-safe lock instead. (Returning the
        # lock would be a no-op: acquire()/release() use self.lock.)
        self.lock = self._lock
def setup_logging(level, filename, format, datefmt, lock):
    global log  # Creates a global log variable in each process
    log = logging.getLogger()
    handler = ProcessFileHandler(filename, lock=lock)
    log.setLevel(level)
    fmt = logging.Formatter(fmt=format, datefmt=datefmt)
    handler.setFormatter(fmt)
    log.addHandler(handler)
def worker(n):
    log.info("before")
    time.sleep(n)
    log.info("after")
if __name__=='__main__':
    nproc = 40
    # initialize all the logging attributes we need
    format="%(asctime)-4s %(process)6s  %(message)s"
    datefmt="%m-%d %H:%M:%S"
    filename="test_%s.log"%(datetime.datetime.today().strftime("%Y%m%d-%H%M%S"))
    level=logging.DEBUG
    lock = multiprocessing.RLock()
    setup_logging(level, filename, format, datefmt, lock)  # Create one for this process
    pool = multiprocessing.Pool(processes=nproc, initializer=setup_logging,
                                initargs=(level, filename, format, datefmt, lock))
    pool.map(worker, (6,)*nproc)
    pool.close()
    pool.join()
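As an aside, if you're on Python 3.2+ you can avoid the custom handler entirely: logging.handlers.QueueHandler and QueueListener funnel every record through a queue to a single writer in the parent process, so no cross-process lock is needed at all. A sketch of that alternative (the log filename is arbitrary):

```python
import logging, logging.handlers, multiprocessing

def init_worker(queue):
    # Workers never touch the file; they forward records to the
    # parent process over the shared queue.
    root = logging.getLogger()
    root.handlers = [logging.handlers.QueueHandler(queue)]
    root.setLevel(logging.DEBUG)

def work(n):
    logging.getLogger().info("processed %s", n)

if __name__ == '__main__':
    manager = multiprocessing.Manager()
    queue = manager.Queue()
    handler = logging.FileHandler('mp_queue_demo.log')
    listener = logging.handlers.QueueListener(queue, handler)
    listener.start()  # one thread in this process does all the writing
    pool = multiprocessing.Pool(processes=4, initializer=init_worker,
                                initargs=(queue,))
    pool.map(work, range(8))
    pool.close()
    pool.join()
    listener.stop()
```

Since only one process ever writes to the file, records can't stomp on each other by construction; the trade-off is the extra queue hop per record.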