I have been using PySpark to run a large number of calculations on a fairly large DataFrame in which each block of rows looks like this:
ID  CHK  C1 Flag1   V1  V2  C2  Flag2   V3  V4  
341 10  100 TRUE    10  10  150 FALSE   10  14
341 9   100 TRUE    10  10  150 FALSE   10  14
341 8   100 TRUE    14  14  150 FALSE   10  14
341 7   100 TRUE    14  14  150 FALSE   10  14
341 6   100 TRUE    14  14  150 FALSE   10  14
341 5   100 TRUE    14  14  150 FALSE   10  14
341 4   100 TRUE    14  14  150 FALSE   12  14
341 3   100 TRUE    14  14  150 FALSE   14  14
341 2   100 TRUE    14  14  150 FALSE   14  14
341 1   100 TRUE    14  14  150 FALSE   14  14
341 0   100 TRUE    14  14  150 FALSE   14  14
Each ID occurs many times, depending on the C1 values (C1 might run, for instance, from 100 to 130 and so on, and for each of those integer values there is a set of 11 rows like the ones above), and I have many IDs. What I need to do is apply a formula within each group of rows and add two columns that calculate:
D1 = ((row.V1 - prev_row.V1)/2)/((row.V2 + prev_row.V2)/2)
D2 = ((row.V3 - prev_row.V3)/2)/((row.V4 + prev_row.V4)/2)
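To make this concrete with the sample data above: for the CHK = 8 row (whose previous row, ordering by descending CHK, is the CHK = 9 row), D1 = ((14 - 10)/2) / ((14 + 10)/2) = 2/12 ≈ 0.17, and D2 = ((10 - 10)/2) / ((14 + 14)/2) = 0.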
What I did (following this helpful article: https://arundhaj.com/blog/calculate-difference-with-previous-row-in-pyspark.html) was to define a window:
from pyspark.sql import Window
from pyspark.sql.functions import lag, desc
import pyspark.sql.functions as F

my_window = Window.partitionBy().orderBy(desc("CHK"))
and for each intermediate calculation I created a "temp" column:
# lag() gives the previous row's value within the window
df = df.withColumn("prev_V1", lag(df.V1).over(my_window))
df = df.withColumn("prev_V2", lag(df.V2).over(my_window))
df = df.withColumn("prev_V3", lag(df.V3).over(my_window))
df = df.withColumn("prev_V4", lag(df.V4).over(my_window))
df = df.withColumn("Sub_V1", F.when(F.isnull(df.V1 - df.prev_V1), 0).otherwise((df.V1 - df.prev_V1)/2))
df = df.withColumn("Sub_V2", (df.V2 + df.prev_V2)/2)
df = df.withColumn("Sub_V3", F.when(F.isnull(df.V3 - df.prev_V3), 0).otherwise((df.V3 - df.prev_V3)/2))
df = df.withColumn("Sub_V4", (df.V4 + df.prev_V4)/2)
df = df.withColumn("D1", F.when(F.isnull(df.Sub_V1 / df.Sub_V2), 0).otherwise(df.Sub_V1 / df.Sub_V2))
df = df.withColumn("D2", F.when(F.isnull(df.Sub_V3 / df.Sub_V4), 0).otherwise(df.Sub_V3 / df.Sub_V4))
Lastly I got rid of the temp columns:
final_df = df.select(*columns_needed)
It took way too long and I kept getting:
WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
I know that I am not doing this properly, as the code block above sits inside a couple of for loops so that the calculations run for all IDs, i.e. looping using:
unique_IDs = list(df1.toPandas()['ID'].unique())
but after reading more about PySpark window functions I believe that by setting the window's partitionBy() correctly I could get the same result much more easily.
I had a look at Avoid performance impact of a single partition mode in Spark window functions, but I am still not sure how to set my window partition correctly to make this work.
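My best guess so far (and it is only a guess; whether the right key is ID alone or ID together with C1 is exactly the part I am unsure about) is something along the lines of:

my_window = Window.partitionBy("ID", "C1").orderBy(desc("CHK"))
df = df.withColumn("prev_V1", lag(df.V1).over(my_window))

so that Spark would process each 11-row group in its own partition instead of moving all the data to a single one.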
Can someone provide me some help or insight on how I could tackle this?
Thank you