
I have written a method that must consider a random number to simulate a Bernoulli distribution. I am using rand.nextDouble to generate a number between 0 and 1, then making my decision based on that value given my probability parameter.

My problem is that Spark is generating the same random numbers within each iteration of my for loop mapping function. I am using the DataFrame API. My code follows this format:

val myClass = new MyClass()
val M = 3
val myAppSeed = 91234
val rand = new scala.util.Random(myAppSeed)

for (m <- 1 to M) {
  val newDF = sqlContext.createDataFrame(myDF
    .map{row => RowFactory
      .create(row.getString(0),
        myClass.myMethod(row.getString(2), rand.nextDouble()))
    }, myDF.schema)
}

Here is the class:

class MyClass extends Serializable {
  val q = qProb

  def myMethod(s: String, rand: Double) = {
    if (rand <= q) // do something
    else // do something else
  }
}

I need a new random number every time myMethod is called. I also tried generating the number inside my method with java.util.Random (scala.util.Random in Scala 2.10 does not extend Serializable), like below, but I'm still getting the same numbers within each for loop:

val r = new java.util.Random(s.hashCode.toLong)
val rand = r.nextDouble()

I've done some research, and it seems this has to do with Spark's deterministic nature.
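One plausible mechanism, sketched here in plain Scala (no Spark required): when a seeded generator is shipped with the task closure, every task effectively starts from an identical generator state. The snippet simulates two tasks that each receive a copy seeded with the same application seed, and both produce the same "random" sequence:

```scala
import scala.util.Random

// Simulate two Spark tasks that each receive a generator
// seeded with the same application seed.
val myAppSeed = 91234
val task1 = new Random(myAppSeed)
val task2 = new Random(myAppSeed)

val seq1 = Seq.fill(3)(task1.nextDouble())
val seq2 = Seq.fill(3)(task2.nextDouble())

// Identical seeds => identical sequences, which is exactly the
// repetition observed across partitions.
println(seq1 == seq2)  // true
```

This is only an illustration of the seeding behaviour, not a reproduction of Spark's task-serialization internals.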

Brian

4 Answers


Just use the SQL function rand (it also accepts an optional seed, rand(seed), if you need a reproducible column):

import org.apache.spark.sql.functions._

//df: org.apache.spark.sql.DataFrame = [key: int]

df.select($"key", rand() as "rand").show
+---+-------------------+
|key|               rand|
+---+-------------------+
|  1| 0.8635073400704648|
|  2| 0.6870153659986652|
|  3|0.18998048357873532|
+---+-------------------+


df.select($"key", rand() as "rand").show
+---+------------------+
|key|              rand|
+---+------------------+
|  1|0.3422484248879837|
|  2|0.2301384925817671|
|  3|0.6959421970071372|
+---+------------------+
David Griffin
  • This didn't quite solve my problem, but it's an elegant solution that I will likely be using in the future, so +1 – Brian Apr 06 '16 at 20:33

According to this post, the best solution is not to put the new scala.util.Random inside the map, nor completely outside (i.e. in the driver code), but in an intermediate mapPartitionsWithIndex:

import scala.util.Random
val myAppSeed = 91234
val newRDD = myRDD.mapPartitionsWithIndex { (indx, iter) =>
  val rand = new scala.util.Random(indx + myAppSeed)
  iter.map(x => (x, Array.fill(10)(rand.nextDouble)))
}
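To see why seeding with the partition index decorrelates partitions, here is a plain-Scala sketch (no Spark needed) that simulates what mapPartitionsWithIndex does: one generator per partition, so the streams differ across partitions while each remains reproducible:

```scala
import scala.util.Random

val myAppSeed = 91234

// Simulate one partition's generator: seeded from the partition
// index plus the application seed, as in the answer above.
def partitionStream(indx: Int, n: Int): Seq[Double] = {
  val rand = new Random(indx + myAppSeed)
  Seq.fill(n)(rand.nextDouble())
}

val p0 = partitionStream(0, 5)
val p1 = partitionStream(1, 5)

// Different seeds => different streams per partition ...
println(p0 != p1)
// ... but each stream is reproducible from run to run.
println(p0 == partitionStream(0, 5))
```

As d-xa's comment below the answer notes, stitching several independently seeded streams together is not statistically the same as one long stream from a single generator, so treat this as a pragmatic fix rather than a rigorous one.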
leo9r
  • Had to maintain code that used this solution, and I want to share with the community that it has downsides and may badly skew your statistical analysis, so please be aware. When your RDD has more than one partition, the sequence of random numbers starts over for each partition with a new seed; the numbers differ, but this can change the characteristics of the whole sequence. My advice: don't use this approach. – d-xa Sep 07 '21 at 09:45
  • @d-xa thanks for your comment. Could you recommend an alternative approach? – leo9r Sep 23 '21 at 17:38
  • if one uses this approach I would suggest to fix the partition for myRDD to 1 – d-xa Sep 24 '21 at 08:32

The reason the same sequence repeats is that the random generator is created and seeded before the data is partitioned, so each partition starts from the same state. Maybe not the most efficient way to do it, but the following should work:

val myClass = new MyClass()
val M = 3

for (m <- 1 to M) {
  val newDF = sqlContext.createDataFrame(myDF
    .map {
      val rand = scala.util.Random
      row => RowFactory
        .create(row.getString(0),
          myClass.myMethod(row.getString(2), rand.nextDouble()))
    }, myDF.schema)
}
Pascal Soucy
  • I modified this slightly to solve my problem. I passed the Random val into my method and generated random numbers from within there. This solved my problem, but I had to use `java.util.Random` for serializability reasons. – Brian Apr 06 '16 at 20:34
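Based on Brian's comment, one common pattern (a sketch with illustrative names, not his actual code) is to hold a `java.util.Random` inside the serializable class; marking it `@transient lazy` makes each executor build its own instance rather than deserializing identical generator state:

```scala
import java.util.Random

// Sketch: the generator is @transient lazy, so it is re-created
// on each executor instead of being shipped with the closure.
class MyClass(q: Double, seed: Long) extends Serializable {
  @transient private lazy val rng = new Random(seed)

  // Bernoulli(q) trial: true with probability q.
  def myMethod(s: String): Boolean = rng.nextDouble() <= q
}

val c = new MyClass(0.5, 91234L)
val draws = Seq.fill(1000)(c.myMethod("row"))

// With q = 0.5 over 1000 draws we expect both outcomes to occur.
println(draws.contains(true) && draws.contains(false))
```

Note the caveat: with a fixed seed, every executor's re-created generator still starts identically, so mixing a per-task value into the seed (e.g. the partition index, as in the mapPartitionsWithIndex answer) is still advisable.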

Using the Spark Dataset API, for example to derive a pseudo-random integer column (perhaps for use in an accumulator):

df.withColumn("_n", substring(rand(),3,4).cast("bigint"))
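What this expression does, roughly: rand() is cast to a string like "0.8635073400704648", substring(…, 3, 4) (1-based in Spark SQL) keeps four digits of the fractional part, and the final cast turns them into an integer in [0, 9999]. A plain-Scala illustration of the same digit-slicing idea on a fixed value:

```scala
// Spark SQL's substring is 1-based; Scala's String.substring is
// 0-based, so substring(rand(), 3, 4) corresponds to .substring(2, 6).
val r = 0.8635073400704648
val digits = r.toString.substring(2, 6)  // four digits after "0."
val n = digits.toLong                    // pseudo-random integer in [0, 9999]
println(n)  // 8635
```

One caveat: very small values of rand() render in scientific notation (e.g. 1.0E-4), which would break the slicing, so this is more of a quick hack than a robust approach.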