Possible duplicate of: Spark: subtract two DataFrames if both datasets have exact same columns
If you need a custom join condition, you can use an "anti" join. Here is the PySpark version.
Creating two DataFrames:
DataFrame 1:
l1 = [('col1_row1', 10), ('col1_row2', 20), ('col1_row3', 30)]
df1 = spark.createDataFrame(l1).toDF('col1','col2')
df1.show()
+---------+----+
|     col1|col2|
+---------+----+
|col1_row1|  10|
|col1_row2|  20|
|col1_row3|  30|
+---------+----+
DataFrame 2:
l2 = [('col1_row1', 10), ('col1_row2', 20), ('col1_row4', 40)]
df2 = spark.createDataFrame(l2).toDF('col1','col2')
df2.show()
+---------+----+
|     col1|col2|
+---------+----+
|col1_row1|  10|
|col1_row2|  20|
|col1_row4|  40|
+---------+----+
Using the subtract API:
df_final = df1.subtract(df2)
df_final.show()
+---------+----+
|     col1|col2|
+---------+----+
|col1_row3|  30|
+---------+----+
Using left_anti:
First, the join condition:
join_condition = [df1["col1"] == df2["col1"], df1["col2"] == df2["col2"]]
Then the join itself:
df_final = df1.join(df2, join_condition, 'left_anti')
df_final.show()
+---------+----+
|     col1|col2|
+---------+----+
|col1_row3|  30|
+---------+----+