I was playing around with benchmarking numpy arrays because I was getting slower-than-expected results when I tried to replace Python lists with numpy arrays in a script.
I know I'm missing something, and I was hoping someone could clear up my ignorance.
I created two functions and timed them:
import numpy as np

NUM_ITERATIONS = 1000

def np_array_addition():
    np_array = np.array([1, 2])
    for x in xrange(NUM_ITERATIONS):
        # Each indexed access goes through numpy's Python-level machinery
        np_array[0] += x
        np_array[1] += x
def py_array_addition():
    py_array = [1, 2]
    for x in xrange(NUM_ITERATIONS):
        # Same loop, but indexing into a plain Python list
        py_array[0] += x
        py_array[1] += x
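For completeness, here's roughly how I ran the comparison (a sketch; the repeat count below is illustrative, not the exact setup that produced these numbers):

from timeit import timeit

print(timeit(np_array_addition, number=100))
print(timeit(py_array_addition, number=100))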
Results:
np_array_addition: 2.556 seconds
py_array_addition: 0.204 seconds
What gives? What's causing the massive slowdown? I figured that with statically sized arrays, numpy would be at least as fast as plain lists.
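(For reference, I assume the idiomatic fix is to vectorize and skip the per-element loop entirely; something like this sketch, where the increments are summed up front. But my question here is about element access specifically.)

def np_vectorized_addition():
    np_array = np.array([1, 2])
    # Each element ends up incremented by sum(0..NUM_ITERATIONS-1),
    # applied as one whole-array operation instead of a Python loop.
    np_array += sum(xrange(NUM_ITERATIONS))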
Thanks!
Update:
It kept bothering me that numpy array access was slow, and I figured, "Hey, they're just arrays in memory, right? Cython should solve this!"
And it did. Here's my revised benchmark:
import numpy as np
cimport numpy as np

ctypedef np.int_t DTYPE_t

NUM_ITERATIONS = 200000

def np_array_assignment():
    # Typed buffer: element access compiles to direct C-level indexing
    cdef np.ndarray[DTYPE_t, ndim=1] np_array = np.array([1, 2])
    for x in xrange(NUM_ITERATIONS):
        np_array[0] += 1
        np_array[1] += 1
def py_array_assignment():
    py_array = [1, 2]
    for x in xrange(NUM_ITERATIONS):
        py_array[0] += 1
        py_array[1] += 1
The only change is declaring np_array as cdef np.ndarray[DTYPE_t, ndim=1], which tells Cython the buffer's dtype and dimensionality so element access compiles down to direct memory reads and writes instead of going through Python's indexing protocol.
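In case anyone wants to reproduce this, I built the .pyx with a standard setup.py along these lines (a sketch; np_bench.pyx is just a hypothetical name for the file above):

from distutils.core import setup
from Cython.Build import cythonize
import numpy

# Build in place with: python setup.py build_ext --inplace
setup(
    ext_modules=cythonize("np_bench.pyx"),
    include_dirs=[numpy.get_include()],  # so cimport numpy can find the headers
)

Then the compiled module can be imported and timed from a normal Python session: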
print(timeit(py_array_assignment, number=3))
# 0.03459
print(timeit(np_array_assignment, number=3))
# 0.00755
That's with the Python function also being compiled by Cython. For reference, timing py_array_assignment in pure Python gives:
print(timeit(py_array_assignment, number=3))
# 0.12510
That's roughly a 17x speedup over the pure-Python loop. Sure, it's a silly example, but I thought it was educational.