I need some help with dplyr.
I have two data frames: a huge one (LargeDF) containing several time series A, B, ..., and a second one (Categories) with time intervals (left and right boundaries).
I would like to add another column to LargeDF, labeled left_boundary, containing the appropriate boundary value, like so:
LargeDF
   ts timestamp   signal     # left_boundary
1   A 0.3209338 10.43279     # 0
2   A 1.4791524 10.34295     # 1
3   A 2.6007494 10.71601     # 2
and
Categories
   ts left right
1   A    0     1
2   A    1     2
3   A    2     3
The code I came up with is
LargeDF %>%
  group_by(ts) %>%
  do(myFUN(., Categories))
# ... which calls this helper:
myFUN <- function(Large, Categ) {
  # keep only the intervals belonging to the current time series
  CategTS <- Categ %>%
    filter(ts == Large[1, "ts"][[1]])
  # for each timestamp, pick the interval that contains it
  Large %>%
    group_by(timestamp) %>%  # this is bothering me...
    mutate(left_boundary = CategTS$left[CategTS$left < timestamp
                                        & timestamp < CategTS$right])
}
but it is very slow for long time series. I would really like to drop the group_by(timestamp), as the timestamps are unique within each ts anyway.
Does anyone see a better solution? That would be much appreciated.
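For illustration, here is a sketch of the kind of vectorized lookup I am hoping for (a hypothetical myFUN2; it assumes the intervals within each ts are sorted and contiguous, and note that findInterval() assigns a timestamp lying exactly on a boundary to the interval on its right, unlike the strict inequalities above):

myFUN2 <- function(Large, Categ) {
  CategTS <- Categ %>%
    filter(ts == Large$ts[1])
  # findInterval() maps all timestamps of the group to their interval index at once
  Large %>%
    mutate(left_boundary = CategTS$left[findInterval(timestamp, CategTS$left)])
}

LargeDF %>%
  group_by(ts) %>%
  do(myFUN2(., Categories))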
# Code for making the example data frames ...
library("dplyr")
n <- 10; series <- c("A", "B", "C")
LargeDF <- data.frame(
    ts        = rep(series, each = n)
  , timestamp = runif(n*length(series), max = 4)
  , signal    = runif(n*length(series), min = 10, max = 11)
) %>% group_by(ts) %>% arrange(timestamp, .by_group = TRUE)
m <- 7
Categories <- data.frame(
    ts    = rep(series, each = m)
  , left  = rep(seq_len(m) - 1, length(series))
  , right = rep(seq_len(m), length(series))
)
Update (data.table and my slightly modified mockup)
I first tried the suggestions from @DavidArenburg on a quick-and-dirty mockup example, but ran into the problem that some timestamps were binned twice (into two successive categories/intervals).
> foverlaps(d, c, type="any", by.x = c("timestamp", "timestamp2"))
    left right     value timestamp timestamp2
 1:  0.9   1.9 0.1885459         1          1
 2:  0.9   1.9 0.0542375         2          2  # binned here
 3:  1.9   2.9 0.0542375         2          2  # and here as well
13: 19.9  25.9 0.4579986        20         20
I then read that minoverlap = 1L is the default and realized that a typical timestamp is >> 1.
> as.numeric(Sys.time())
[1] 1429022267
Therefore, once I scaled everything up to larger values (e.g. n <- 10 in the example below), everything worked fine.
   left right      value timestamp timestamp2
1:    9    19 0.64971126        10         10
2:   19    29 0.75994751        20         20
3:   29    99 0.98276462        30         30
9:  199   259 0.89816165       200        200
With my real data, everything went smoothly, so thanks again.
## Code for my data.table example -----
library("data.table")
n <- 1   # scaling factor (n <- 10 reproduces the fixed, scaled-up version)
d <- data.table(value      = runif(9),
                timestamp  = c(1, 2, 3, 5, 7, 10, 15, 18, 20)*n,
                timestamp2 = c(1, 2, 3, 5, 7, 10, 15, 18, 20)*n)
c <- data.table(left  = c(0.9, 1.9, 2.9,  9.9, 19.9, 25.9)*n,
                right = c(1.9, 2.9, 9.9, 19.9, 25.9, 33.9)*n)
setkey(c, left, right)   # foverlaps() requires a keyed y table
foverlaps(d, c, type = "any", by.x = c("timestamp", "timestamp2"))
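For completeness, here is roughly how this translates to the original LargeDF/Categories (a sketch, not verbatim from the answer: ts goes into the key so that intervals are matched per series, and it assumes the timestamps are large enough to avoid the minoverlap effect described above):

LargeDT <- as.data.table(LargeDF)[, timestamp2 := timestamp]  # point intervals
CategDT <- as.data.table(Categories)
setkey(CategDT, ts, left, right)   # last two key columns form the interval
foverlaps(LargeDT, CategDT, by.x = c("ts", "timestamp", "timestamp2"))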
Update 2 (JOIN, then FILTER, within dplyr)
I tested @aosmith's suggestion to use dplyr's left_join() to create one (very) large data frame and then filter() it down again. I ran into memory issues very quickly:
Error: std::bad_alloc
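For reference, the join-then-filter version I tried looked roughly like this (a sketch; @aosmith's exact suggestion may differ in the column handling):

LargeDF %>%
  left_join(Categories, by = "ts") %>%             # one row per timestamp x interval
  filter(left < timestamp, timestamp < right) %>%  # keep only the containing interval
  rename(left_boundary = left) %>%
  select(-right)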
This approach would probably be a good idea for smaller tables, as the syntax is very nice (but that, again, is personal preference). I'll go with the data.table solution in this case. Thanks again for all the suggestions.