I am trying to process data stored in a text file, test.dat, which looks like this:
-1411.85  2.6888   -2.09945   -0.495947   0.835799   0.215353   0.695579   
-1411.72  2.82683   -0.135555   0.928033   -0.196493   -0.183131   -0.865999   
-1412.53  0.379297   -1.00048   -0.654541   -0.0906588   0.401206   0.44239   
-1409.59  -0.0794765   -2.68794   -0.84847   0.931357   -0.31156   0.552622   
-1401.63  -0.0235102   -1.05206   0.065747   -0.106863   -0.177157   -0.549252   
....
....
The file, however, is several GB, and I would very much like to read it in small blocks of rows. I would like to use NumPy's loadtxt function, since it quickly converts everything to a numpy array. So far, though, I have not managed it, as the function only seems to offer a selection of columns, like here:
data = np.loadtxt("test.dat", usecols=range(1, 7))  # default delimiter handles the variable whitespace
Any ideas how to achieve this? If it is not possible with loadtxt, are there any other options available in Python?
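For context, the kind of block-wise reading I have in mind would look roughly like this sketch (the `iter_blocks` name and the block size are my own placeholders; it relies on loadtxt accepting a list of lines, which recent NumPy versions document):

```python
import io
from itertools import islice

import numpy as np

# Small stand-in for test.dat (the real file is several GB).
sample = """\
-1411.85  2.6888   -2.09945   -0.495947   0.835799   0.215353   0.695579
-1411.72  2.82683   -0.135555   0.928033   -0.196493   -0.183131   -0.865999
-1412.53  0.379297   -1.00048   -0.654541   -0.0906588   0.401206   0.44239
-1409.59  -0.0794765   -2.68794   -0.84847   0.931357   -0.31156   0.552622
-1401.63  -0.0235102   -1.05206   0.065747   -0.106863   -0.177157   -0.549252
"""

def iter_blocks(fh, block_size):
    """Yield successive 2-D arrays of at most block_size rows from an open file."""
    while True:
        lines = list(islice(fh, block_size))  # pull the next block of raw lines
        if not lines:
            return
        # loadtxt also accepts a list of strings, so each block is parsed
        # on its own; ndmin=2 keeps a single-row block two-dimensional.
        yield np.loadtxt(lines, ndmin=2)

for block in iter_blocks(io.StringIO(sample), 2):
    print(block.shape)
# prints (2, 7), (2, 7), (1, 7)
```

Since the file handle only advances line by line, this would never hold more than one block in memory, which is the behaviour I am after.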