I am trying to extract regex patterns from a column using PySpark. I have a DataFrame containing the regex patterns and another containing the strings I'd like to match them against.
columns = ['id', 'text']
vals = [
    (1, 'here is a Match1'),
    (2, 'Do not match'),
    (3, 'Match2 is another example'),
    (4, 'Do not match'),
    (5, 'here is a Match1')
]
df_to_extract = sql.createDataFrame(vals, columns)

columns = ['id', 'Regex', 'Replacement']
vals = [
    (1, 'Match1', 'Found1'),
    (2, 'Match2', 'Found2'),
]
df_regex = sql.createDataFrame(vals, columns)
I'd like to match each pattern in the 'Regex' column against the 'text' column of 'df_to_extract', and return the id of every matching row together with the 'Replacement' value corresponding to the matched 'Regex'. For example:
+---+------------+
| id| replacement|
+---+------------+
|  1|      Found1|
|  3|      Found2|
|  5|      Found1|
+---+------------+
Thanks!