I have a data.table, allData, containing data for roughly every (POSIXct) second across different nights. Several nights can fall on the same date, however, since data is collected from different people, so I have a column nightNo as an id for each distinct night.
          timestamp  nightNo    data1     data2
2018-10-19 19:15:00        1        1         7
2018-10-19 19:15:01        1        2         8
2018-10-19 19:15:02        1        3         9
2018-10-19 18:10:22        2        4        10
2018-10-19 18:10:23        2        5        11 
2018-10-19 18:10:24        2        6        12
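For reference, a minimal reproducible version of this table can be built as follows (a sketch only; the real allData is far larger):

```r
library(data.table)

# toy stand-in for allData, matching the rows shown above
allData <- data.table(
  timestamp = as.POSIXct(c("2018-10-19 19:15:00", "2018-10-19 19:15:01",
                           "2018-10-19 19:15:02", "2018-10-19 18:10:22",
                           "2018-10-19 18:10:23", "2018-10-19 18:10:24")),
  nightNo = rep(1:2, each = 3L),
  data1 = 1:6,
  data2 = 7:12
)
```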
I'd like to aggregate the data to minutes (per night), and using this question I've come up with the following code:
library(data.table)
library(dplyr)

aggregate_minute <- function(df){
  df %>%
    group_by(timestamp = cut(timestamp, breaks = "1 min")) %>%
    summarise(data1 = mean(data1), data2 = mean(data2)) %>%
    as.data.table()
}

# pass .SD so each night's subset is aggregated, not the whole table
allData <- allData[, aggregate_minute(.SD), by = nightNo]
However my data.table is quite large and this code isn't fast enough. Is there a more efficient way to solve this problem?
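One data.table-native alternative (a sketch, assuming timestamp is POSIXct as above) is to floor every timestamp to the start of its minute with vectorised arithmetic and aggregate in a single grouped call, avoiding the per-group dplyr pipeline entirely:

```r
library(data.table)

# toy data: two nights, 60 one-second readings each (stand-in for allData)
allData <- data.table(
  timestamp = as.POSIXct("2018-10-19 19:15:00", tz = "UTC") + 0:119,
  nightNo = rep(1:2, each = 60L),
  data1 = 1:120,
  data2 = 101:220
)

# subtracting the within-minute seconds floors each POSIXct value to its
# minute; arithmetic on POSIXct is vectorised and keeps the class, so no
# cut()/character conversion is needed
result <- allData[, .(data1 = mean(data1), data2 = mean(data2)),
                  by = .(nightNo,
                         timestamp = timestamp - as.numeric(timestamp) %% 60)]
```

Because the grouping happens once over the whole table, this should scale much better than calling a summarising function separately for each nightNo.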