I am working on a script that writes a massive amount of data to a .csv file. To make it easier to share the data among interested users, I would like to cap the number of rows per file. For example, the first million records should be written to some_csv_file_1.csv, the second million to some_csv_file_2.csv, and so on, until all records have been written.
I have attempted to get the following to work:
import csv
csv_record_counter = 1
csv_file_counter = 1
while csv_record_counter <= 1000000:
    with open('some_csv_file_' + str(csv_file_counter) + '.csv', 'w') as csvfile:
        output_writer = csv.writer(csvfile, lineterminator = "\n")
        output_writer.writerow(['record'])
        csv_record_counter += 1
while not csv_record_counter <= 1000000:
    csv_record_counter = 1
    csv_file_counter += 1
Problem: Once the record count exceeds 1,000,000, no subsequent files are created. The script keeps adding records to the original file.
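One way to restructure this (a sketch, not a drop-in fix — the function name `write_chunked_csv`, the `records` iterable, and the `max_rows_per_file` parameter are illustrative, not from the original script) is to loop over the records once and open a fresh file whenever the current one reaches the limit, rather than nesting the `open()` inside a counter-driven `while`:

```python
import csv

def write_chunked_csv(records, prefix="some_csv_file", max_rows_per_file=1_000_000):
    """Write records to prefix_1.csv, prefix_2.csv, ... with at most
    max_rows_per_file rows per file. Returns the number of files written."""
    file_counter = 0
    row_counter = 0
    csvfile = None
    writer = None
    try:
        for record in records:
            # Start a new file when none is open yet or the current one is full.
            if writer is None or row_counter >= max_rows_per_file:
                if csvfile is not None:
                    csvfile.close()
                file_counter += 1
                csvfile = open(f"{prefix}_{file_counter}.csv", "w", newline="")
                writer = csv.writer(csvfile, lineterminator="\n")
                row_counter = 0
            writer.writerow(record)
            row_counter += 1
    finally:
        # Close the last file even if the caller's iterator raises mid-stream.
        if csvfile is not None:
            csvfile.close()
    return file_counter
```

With this shape, 2,500,000 records and the default limit would produce three files: two full ones and a third holding the remaining 500,000 rows. The original attempt increments `csv_record_counter` inside the first `while` loop but never reaches the second loop that resets it, so the file counter never advances.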