I have a very large data file (my_file.dat) containing 31191984 rows of several variables. I would like to programmatically import this dataset into R in smaller parts, e.g. as data frames of 1 million rows each. At this link, it is suggested to use read.table() with the nrows option. It works for the first round of 1 million rows using this command:
my_data <- read.table("path_to_my_file.dat", nrows = 1e+06)
How do I automate this procedure for the subsequent rounds of 1 million rows until all parts are imported as R data frames? I am aware that one option would be to store the data in an SQL database and let R talk to SQL. However, I am looking for an R-specific solution only.
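For reference, this is a rough, untested sketch of the kind of loop I have in mind, using skip together with nrows; the chunk size and the list of data frames are my own placeholders, and I suspect repeatedly skipping rows is not the most efficient way:

chunk_size <- 1e6
total_rows <- 31191984
n_chunks   <- ceiling(total_rows / chunk_size)

chunks <- vector("list", n_chunks)
for (i in seq_len(n_chunks)) {
  chunks[[i]] <- read.table(
    "path_to_my_file.dat",
    skip  = (i - 1) * chunk_size,  # skip the rows already read in earlier rounds
    nrows = chunk_size             # read at most 1 million rows this round
  )
}

Is something along these lines the right approach, or is there a better way to iterate over the file?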