I am trying to process a relatively large CSV file (about 100k lines) in Python. This is what my code looks like:
#!/usr/bin/env python
import sys
reload(sys)
sys.setdefaultencoding("utf8")
import csv
import os

csvFileName = sys.argv[1]
# the csv module on Python 2 expects the file opened in binary mode
with open(csvFileName, 'rb') as inputFile:
    parsedFile = csv.DictReader(inputFile, delimiter=',')
    totalCount = 0
    for row in parsedFile:
        target = row['new']
        source = row['old']
        # build the external curl command for this row and run it
        systemLine = "some_curl_command {source}, {target}".format(source=source, target=target)
        os.system(systemLine)
        totalCount += 1
        print "\nProcessed number: " + str(totalCount)
I'm not sure how to optimize this script. Should I use something besides DictReader? I have to use Python 2.7 and cannot upgrade to Python 3.
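For reference, here is a minimal sketch of the direction I was considering, assuming the external curl call (not the CSV parsing) is the slow part: run the calls in parallel with multiprocessing. some_curl_command is a placeholder for my real command, and the worker count of 8 is an arbitrary guess:

#!/usr/bin/env python
import csv
import subprocess
import sys
from multiprocessing import Pool

def process_row(pair):
    source, target = pair
    # subprocess.call with an argument list avoids shell quoting issues;
    # "some_curl_command" is a placeholder for the real command
    return subprocess.call(["some_curl_command", source, target])

def main():
    csvFileName = sys.argv[1]
    # read all (old, new) pairs up front; 100k rows fits in memory easily
    with open(csvFileName, 'rb') as inputFile:
        rows = [(row['old'], row['new']) for row in csv.DictReader(inputFile)]
    pool = Pool(8)  # arbitrary worker count; tune for the target server
    # imap_unordered yields results as workers finish, so progress
    # printing does not wait on any one slow request
    for totalCount, _ in enumerate(pool.imap_unordered(process_row, rows), 1):
        print "Processed number: " + str(totalCount)
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()

Is something like this a reasonable approach, or is there a better way to structure it in 2.7?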