I have a bunch of large tab-delimited text files, with a format similar to:
a   0.0694892   0   0.0118814   0   -0.0275522  
b   0.0227414   -0.0608639  0.0811518   -0.15216    0.111584    
c   0   0.0146492   -0.103492   0.0827939   0.00631915
To count the number of columns I have always used:
>>> import numpy as np
>>> np.loadtxt('file.txt', dtype='str').shape[1]
6
However, this method is clearly inefficient for bigger files, since the entire file is loaded into memory just to read the array's shape. Is there a simple method that is more efficient?
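For reference, one approach that avoids loading the whole file would be to read only the first line and count its fields, assuming every row has the same number of columns. A minimal sketch (the path `file.txt` and the sample data are taken from the question above):

```python
# Create a small sample file like the one in the question
# (hypothetical path 'file.txt').
with open('file.txt', 'w') as f:
    f.write('a\t0.0694892\t0\t0.0118814\t0\t-0.0275522\n')
    f.write('b\t0.0227414\t-0.0608639\t0.0811518\t-0.15216\t0.111584\n')

# Count columns by reading only the first line -- the rest of
# the file is never touched.
with open('file.txt') as f:
    num_cols = len(f.readline().split('\t'))

print(num_cols)  # 6
```

If staying within NumPy is preferred, `np.loadtxt` also accepts a `max_rows` parameter (NumPy 1.16+), so `np.loadtxt('file.txt', dtype=str, max_rows=1).shape` would only parse the first row.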