I have a large CSV file (20.9 GB) with 16 columns and over 170 million rows.
My computer has 128 GB of RAM, and Python can use all of it.
When I try to read just two columns using pandas.read_csv() with low_memory=False, I get:
ParserError: Error tokenizing data. C error: out of memory
I can read it fine with low_memory=True.
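For reference, this is roughly the call shape that fails on the big file. The sample below uses a small in-memory CSV as a stand-in, and the column names are placeholders, not the real ones:

```python
import io
import pandas as pd

# Small in-memory sample standing in for the 20.9 GB file;
# "a" and "b" are placeholder column names.
csv_data = io.StringIO(
    "a,b,c\n"
    "1,2,3\n"
    "4,5,6\n"
)

# Same call shape that raises the ParserError on the real file
# when low_memory=False; it works here on the tiny sample.
df = pd.read_csv(csv_data, usecols=["a", "b"], low_memory=False)
print(df.shape)  # (2, 2)
```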
Can someone explain why this happens?
