In the context of a discussion in the comments of this question, it was mentioned that while concatenating a sequence of strings is simply ''.join([str1, str2, ...]), concatenating a sequence of lists would be something like list(itertools.chain(lst1, lst2, ...)), although you can also use a list comprehension like [x for y in [lst1, lst2, ...] for x in y]. What surprised me is that the first method (itertools.chain) is consistently faster than the second (the list comprehension):
import random
import itertools
random.seed(100)
lsts = [[1] * random.randint(100, 1000) for i in range(1000)]
%timeit [x for y in lsts for x in y]
# 39.3 ms ± 436 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit list(itertools.chain.from_iterable(lsts))
# 30.6 ms ± 866 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit list(x for y in lsts for x in y) # Proposed in comments
# 62.5 ms ± 504 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# Loop-based methods proposed in the comments
%%timeit
a = []
for lst in lsts: a += lst
# 26.4 ms ± 634 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%%timeit
a = []
for lst in lsts: a.extend(lst)
# 26.7 ms ± 728 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
It is not a difference of orders of magnitude, but it is not negligible either. I was wondering why that might be the case, since list comprehensions are frequently among the fastest methods to solve a given problem. At first I thought that maybe the itertools.chain object would have a __len__ that the list constructor could use to preallocate the necessary memory, but that is not the case (len() cannot be called on itertools.chain objects). Is some custom itertools.chain-to-list conversion taking place somehow, or is itertools.chain taking advantage of some other mechanism?
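For what it is worth, here is the quick check I did (snippet of my own) to confirm that neither len() nor, as far as I can tell, a length hint is available on chain objects:

import itertools
import operator

chained = itertools.chain.from_iterable([[1, 2], [3]])
# len(chained)  # raises TypeError: object of type 'itertools.chain' has no len()
print(operator.length_hint(chained))  # prints 0, i.e. the default: no __length_hint__ either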
Tested in Python 3.6.3 on Windows 10 x64, if that is relevant.
EDIT:
It seems the fastest method after all is calling .extend on an empty list with each of the lists, as proposed by @zwer, probably because it works on "chunks" of data instead of on a per-element basis.
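For completeness, this is the shape of the extend-based approach, wrapped in a helper for reuse (the name concat_lists is just mine for illustration):

def concat_lists(lists):
    """Concatenate an iterable of lists by repeatedly calling list.extend."""
    result = []
    for lst in lists:
        result.extend(lst)  # each call appends a whole sublist in one go
    return result

# e.g. flat = concat_lists(lsts) gives the same result as the methods benchmarked above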