I am confused by the performance I see in pandas when building a large dataframe chunk by chunk. In NumPy, we (almost) always get better performance by preallocating a large empty array and then filling in the values. As I understand it, this is because NumPy grabs all the memory it needs at once instead of having to reallocate (and copy) memory with every append operation.
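For reference, here is a minimal sketch of the NumPy pattern I mean (the sizes are arbitrary, chosen just for illustration):

import numpy as np

n_blocks, rows_per_block, width = 100, 1000, 10

# Preallocate once, then fill slices in place -- one allocation total
out = np.empty((n_blocks * rows_per_block, width))
for i in range(n_blocks):
    out[i * rows_per_block:(i + 1) * rows_per_block, :] = np.random.rand(rows_per_block, width)

# Versus growing the array, which copies all accumulated rows on every iteration
grown = np.empty((0, width))
for i in range(n_blocks):
    grown = np.vstack([grown, np.random.rand(rows_per_block, width)])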
In pandas, however, I seem to get better performance by using the df = df.append(temp) pattern.
Here is an example with timing (the Timer class is defined at the bottom of the post). As you can see, I find that preallocating is roughly 10x slower than using append! Preallocating a dataframe with np.empty values of the appropriate dtype helps a great deal, but the append method is still the fastest.
import numpy as np
from numpy.random import rand
import pandas as pd
from timer import Timer
# Some constants
num_dfs = 10  # Number of random dataframes to generate
n_rows = 2500
n_cols = 40
n_reps = 100  # Number of repetitions for timing
# Generate a list of num_dfs dataframes of random values
df_list = [pd.DataFrame(rand(n_rows * n_cols).reshape((n_rows, n_cols)),
                        columns=np.arange(n_cols))
           for i in np.arange(num_dfs)]
##
# Define two methods of growing a large dataframe
##
# Method 1 - append dataframes
def method1():
    out_df1 = pd.DataFrame(columns=np.arange(n_cols))
    for df in df_list:
        out_df1 = out_df1.append(df, ignore_index=True)
    return out_df1
# Method 2 - preallocate the dataframe, then fill in the values block by block
def method2():
    # Create an empty dataframe big enough to hold all the dataframes in df_list
    out_df2 = pd.DataFrame(columns=np.arange(n_cols), index=np.arange(num_dfs * n_rows))
    # EDIT_1: set the dtype of each column to match the input dataframes
    for ix, col in enumerate(out_df2.columns):
        out_df2[col] = out_df2[col].astype(df_list[0].dtypes[ix])
    # Fill in the values
    for ix, df in enumerate(df_list):
        out_df2.iloc[ix * n_rows:(ix + 1) * n_rows, :] = df.values
    return out_df2
# EDIT_2: Method 3 - preallocate the dataframe with np.empty data of the appropriate dtypes
def method3():
    # Create fake data array
    data = np.transpose(np.array([np.empty(n_rows*num_dfs, dtype=dt) for dt in df_list[0].dtypes]))
    # Create placeholder dataframe
    out_df3 = pd.DataFrame(data)
    # Fill in the real values
    for ix, df in enumerate(df_list):
        out_df3.iloc[ix*n_rows:(ix+1)*n_rows, :] = df.values
    return out_df3
##
# Time the three methods
##
# Time Method 1
times_1 = np.empty(n_reps)
for i in np.arange(n_reps):
    with Timer() as t:
       df1 = method1()
    times_1[i] = t.secs
print 'Total time for %d repetitions of Method 1: %f [sec]' % (n_reps, np.sum(times_1))
print 'Best time: %f' % (np.min(times_1))
print 'Mean time: %f' % (np.mean(times_1))
#>>  Total time for 100 repetitions of Method 1: 2.928296 [sec]
#>>  Best time: 0.028532
#>>  Mean time: 0.029283
# Time Method 2
times_2 = np.empty(n_reps)
for i in np.arange(n_reps):
    with Timer() as t:
        df2 = method2()
    times_2[i] = t.secs
print 'Total time for %d repetitions of Method 2: %f [sec]' % (n_reps, np.sum(times_2))
print 'Best time: %f' % (np.min(times_2))
print 'Mean time: %f' % (np.mean(times_2))
#>>  Total time for 100 repetitions of Method 2: 32.143247 [sec]
#>>  Best time: 0.315075
#>>  Mean time: 0.321432
# Time Method 3
times_3 = np.empty(n_reps)
for i in np.arange(n_reps):
    with Timer() as t:
        df3 = method3()
    times_3[i] = t.secs
print 'Total time for %d repetitions of Method 3: %f [sec]' % (n_reps, np.sum(times_3))
print 'Best time: %f' % (np.min(times_3))
print 'Mean time: %f' % (np.mean(times_3))
#>>  Total time for 100 repetitions of Method 3: 6.577038 [sec]
#>>  Best time: 0.063437
#>>  Mean time: 0.065770
I use a nice Timer courtesy of Huy Nguyen:
# credit: http://www.huyng.com/posts/python-performance-analysis/
import time
class Timer(object):
    def __init__(self, verbose=False):
        self.verbose = verbose
    def __enter__(self):
        self.start = time.clock()
        return self
    def __exit__(self, *args):
        self.end = time.clock()
        self.secs = self.end - self.start
        self.msecs = self.secs * 1000  # millisecs
        if self.verbose:
            print 'elapsed time: %f ms' % self.msecs
If you are still following, I have two questions:
1) Why is the append method faster? (NOTE: for very small dataframes, e.g. n_rows = 40, it is actually slower.)
2) What is the most efficient way to build a large dataframe out of chunks? (In my case, the chunks are all large CSV files; a sketch of what I mean follows.)
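For context, my real-world loop looks roughly like this (the file names are placeholders, and pd.concat is one alternative I am wondering about):

import pandas as pd

csv_files = ['chunk_01.csv', 'chunk_02.csv']  # placeholder file names

# What I do now: grow the dataframe one chunk at a time
big_df = pd.DataFrame()
for fname in csv_files:
    big_df = big_df.append(pd.read_csv(fname), ignore_index=True)

# One alternative: read all the chunks first, then concatenate once
big_df2 = pd.concat([pd.read_csv(fname) for fname in csv_files], ignore_index=True)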
Thanks for your help!
EDIT_1:
In my real-world project, the columns have different dtypes, so I cannot use the pd.DataFrame(..., dtype=some_type) trick to improve the performance of preallocation, per BrenBarn's recommendation: the dtype parameter forces all of the columns to have the same dtype (see pandas issue 4464).
I added some lines to method2() in my code to change the dtypes column by column to match the input dataframes. This operation is expensive and negates the benefit of having the appropriate dtypes when writing blocks of rows.
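To make the single-dtype limitation concrete, here is a minimal illustration (not code from my project):

import numpy as np
import pandas as pd

# The constructor's dtype argument applies to the whole frame, not per column,
# so a mixed int/float preallocation cannot be set up this way
df = pd.DataFrame(np.zeros((3, 2)), columns=['a', 'b'], dtype=np.int64)
print(df.dtypes)  # both columns come out as int64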
EDIT_2: Per @Joris's suggestion, I tried preallocating the dataframe from a placeholder np.empty(..., dtype=some_type) array; see method3() above.