Is there a way to count the number of unique words in a stream with Flink Streaming? The result would be a stream of numbers which keeps increasing.
            Viewed 2,779 times
    1 Answer
8
            You can solve the problem by storing all words which you've already seen. With this knowledge you can filter out all duplicate words. The remaining (first-seen) words can then be counted by a map operator with parallelism 1. The following code snippet does exactly that.
// imports for the Scala DataStream API (implicit TypeInformation) and the immutable HashSet used as state
import org.apache.flink.streaming.api.scala._
import scala.collection.immutable.HashSet

val env = StreamExecutionEnvironment.getExecutionEnvironment
val inputStream = env.fromElements("foo", "bar", "foobar", "bar", "barfoo", "foobar", "foo", "fo")
// filter out words which we have already seen
val uniqueWords = inputStream.keyBy(x => x).filterWithState{
  (word, seenWordsState: Option[Set[String]]) => seenWordsState match {
    case None => (true, Some(HashSet(word)))
    case Some(seenWords) => (!seenWords.contains(word), Some(seenWords + word))
  }
}
// count the number of incoming (first seen) words
val numberUniqueWords = uniqueWords.keyBy(x => 0).mapWithState{
  (word, counterState: Option[Int]) =>
    counterState match {
      case None => (1, Some(1))
      case Some(counter) => (counter + 1, Some(counter + 1))
    }
}.setParallelism(1)
numberUniqueWords.print()
env.execute()
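With the example input, the stateful filter only lets the first occurrence of each word pass (foo, bar, foobar, barfoo, fo), so the counter stream emits the running totals 1 through 5. The `keyBy(x => 0)` routes all first-seen words to the same counter instance, which is why the map can maintain a single consistent count with parallelism 1.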
 
    
    
        Till Rohrmann
        
- Can it cause OOM or performance degradation if the incoming stream is "infinite" and the set of strings (in `filterWithState`) becomes too big? – Maxim Apr 01 '16 at 09:33
- Not if you're using a state backend which supports out-of-core state. The `RocksDBStateBackend` is such a state backend. If you use the memory state backend, then you have to clear the state once in a while, otherwise you might run into OOM. – Till Rohrmann Apr 01 '16 at 09:59
- One more question: as I understand it, the save/restore operations to/from the `RocksDBStateBackend` in this case have complexity O(N), where N is the number of elements in the set. Does this backend always save/restore all elements of the Set, or only the changed elements? – Maxim Apr 01 '16 at 10:10
- This implementation uses the `ValueState` abstraction, which would always save/restore the complete `Set`. However, one could probably also use the `ListState` abstraction to make the checkpointing incremental. – Till Rohrmann Apr 01 '16 at 11:37
- Hi Till, is `filterWithState` only available in version 1.1? I couldn't find it in Flink 1.0.0. – Jun Apr 01 '16 at 13:10
- The `filterWithState` method has been part of Flink's Scala API since version 1.0.0. However, it is only applicable to `KeyedStream`s. This means you first have to call `keyBy` on the stream. – Till Rohrmann Apr 01 '16 at 14:03
 
    