I have a very large csv.gzip file with 59 million rows. I want to filter that file for rows matching certain criteria and put all those rows in a new master CSV file. So far, I've broken the gzip file into 118 smaller CSV files and saved them on my computer. I did that with the following code:
import pandas as pd

num = 0
df = pd.read_csv('google-us-data.csv.gz', header=None,
                 compression='gzip', chunksize=500000,
                 names=['a','b','c','d','e','f','g','h','i','j','k','l','m'],
                 error_bad_lines=False, warn_bad_lines=False)
for chunk in df:
    num = num + 1
    chunk.to_csv('%ggoogle us' % num, sep='\t', encoding='utf-8')
The code above worked perfectly, and I now have a folder with my 118 small files. I then wrote code to go through the 118 files one by one, extract the rows matching my conditions, and append them all to a new CSV file that I've created and named 'google final us'. Here is the code:
import pandas as pd
import numpy

for i in range(1, 119):
    file = open('google final us.csv', 'a')
    df = pd.read_csv('%ggoogle us' % i, error_bad_lines=False,
                     warn_bad_lines=False)
    df_f = df.loc[(df['a'] == 7) & (df['b'] == 2016) & (df['c'] == 'D') &
                  (df['d'] == 'US')]
    file.write(df_f)
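As an aside, I suspect the file.write(df_f) line isn't the right way to append a DataFrame to a file anyway. What I was aiming for is something like the sketch below (append_matches is just a name I made up to wrap the filter-and-append step):

```python
import pandas as pd

def append_matches(df, out_path):
    """Filter rows matching my criteria and append them to the master CSV."""
    # Each comparison is parenthesised so the & operators combine them correctly
    mask = (df['a'] == 7) & (df['b'] == 2016) & (df['c'] == 'D') & (df['d'] == 'US')
    df_f = df.loc[mask]
    # mode='a' appends; header=False avoids repeating a header row on every call
    df_f.to_csv(out_path, mode='a', header=False, index=False)
    return len(df_f)
```

That way each file's matches land in 'google final us.csv' without me managing a file handle myself.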
Unfortunately, the code above is giving me the below error:
KeyError                                  Traceback (most recent call last)
C:\Users\...\Anaconda3\lib\site-packages\pandas\indexes\base.py in get_loc(self, key, method, tolerance)
   1875             try:
-> 1876                 return self._engine.get_loc(key)
   1877             except KeyError:
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4027)()
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:3891)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12408)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12359)()
KeyError: 'a'
During handling of the above exception, another exception occurred:
KeyError                                  Traceback (most recent call last)
<ipython-input-9-0ace0da2fbc7> in <module>()
      3 file = open('google final us.csv','a')
      4 df = pd.read_csv('1google us')
----> 5 df_f = df.loc[(df['a']==7) & (df['b'] == 2016) & (df['c'] =='D') & (df['d'] =='US')]
      6 file.write(df_f)
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
   1990             return self._getitem_multilevel(key)
   1991         else:
-> 1992             return self._getitem_column(key)
   1993 
   1994     def _getitem_column(self, key):
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\frame.py in _getitem_column(self, key)
   1997         # get column
   1998         if self.columns.is_unique:
-> 1999             return self._get_item_cache(key)
   2000 
   2001         # duplicate columns & possible reduce dimensionality
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\generic.py in _get_item_cache(self, item)
  1343         res = cache.get(item)
  1344         if res is None:
-> 1345             values = self._data.get(item)
  1346             res = self._box_item_values(item, values)
  1347             cache[item] = res
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\internals.py in get(self, item, fastpath)
   3223 
   3224             if not isnull(item):
-> 3225                 loc = self.items.get_loc(item)
   3226             else:
   3227                 indexer = np.arange(len(self.items))[isnull(self.items)]
C:\Users\...\Anaconda3\lib\site-packages\pandas\indexes\base.py in get_loc(self, key, method, tolerance)
   1876                 return self._engine.get_loc(key)
   1877             except KeyError:
-> 1878                 return self._engine.get_loc(self._maybe_cast_indexer(key))
   1879 
   1880         indexer = self.get_indexer([key], method=method, tolerance=tolerance)
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4027)()
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:3891)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12408)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12359)()
KeyError: 'a'
Any ideas what's going wrong? I've read numerous other Stack Overflow posts (e.g. Create dataframes from unique value pairs by filtering across multiple columns, or How can I break down a large csv file into small files based on common records by python), but I'm still not sure how to do this. Also, if you have a better way to extract this data than my method, please let me know!
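For reference, one alternative I've been wondering about is filtering each chunk directly while streaming the gzip file, skipping the 118 intermediate files entirely. A rough sketch of what I mean (filter_gzip_csv is a hypothetical helper; I've dropped the error_bad_lines arguments since they're not essential to the idea):

```python
import pandas as pd

def filter_gzip_csv(src_path, out_path, chunksize=500000):
    """Stream the gzipped CSV in chunks and write matching rows to out_path,
    with no intermediate per-chunk files."""
    cols = list('abcdefghijklm')  # same 13 column names as in my original code
    first = True
    total = 0
    for chunk in pd.read_csv(src_path, header=None, names=cols,
                             compression='gzip', chunksize=chunksize):
        matched = chunk.loc[(chunk['a'] == 7) & (chunk['b'] == 2016) &
                            (chunk['c'] == 'D') & (chunk['d'] == 'US')]
        # Write the header only on the first chunk, then append
        matched.to_csv(out_path, mode='w' if first else 'a',
                       header=first, index=False)
        first = False
        total += len(matched)
    return total
```

I'd call it as filter_gzip_csv('google-us-data.csv.gz', 'google final us.csv') — but I'm not sure whether this is better than the two-step approach for a file this size.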