iterrows is very slow, as you have seen.  Use merge, groupby, and boolean filtering to find the relevant rows and update the ID for all of them at once.  You can group by email and count how many unique IDs each email has.  Here's a toy example:
import pandas as pd
row1list = ['stack', '10']
row2list = ['overflow', '20']
row3list = ['overflow', '30']
df1 = pd.DataFrame([row1list, row2list, row3list], columns=['email', 'unique_ID'])
row1list = ['stack', '10']
row2list = ['overflow', '40']
df2 = pd.DataFrame([row1list, row2list], columns=['email', 'unique_ID'])
df_conflicting_ids = df1.groupby('email', as_index=False).agg({
    'unique_ID': 'nunique'})  # count distinct IDs per email
df_conflicting_ids = df_conflicting_ids.rename(columns={'unique_ID':'unique_ID_count'})
df_conflicting_ids = df_conflicting_ids[df_conflicting_ids['unique_ID_count'] > 1]
print(df_conflicting_ids)
#       email  unique_ID_count
# 0  overflow                2
del df_conflicting_ids['unique_ID_count']  # don't need column anymore
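As an aside, the count-rename-filter steps above can be collapsed using groupby(...).transform('nunique'), which broadcasts the per-email count back onto every row.  This is just an alternative sketch of the same filter, not part of the main flow:

```python
import pandas as pd

df1 = pd.DataFrame(
    [['stack', '10'], ['overflow', '20'], ['overflow', '30']],
    columns=['email', 'unique_ID'])

# True for rows whose email carries more than one distinct ID
mask = df1.groupby('email')['unique_ID'].transform('nunique') > 1
df_conflicting_ids = df1.loc[mask, ['email']].drop_duplicates()
print(df_conflicting_ids)
#       email
# 1  overflow
```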
df_conflicting_ids = df_conflicting_ids.merge(df2, on='email', how='left')
df_conflicting_ids = df_conflicting_ids.rename(columns={'unique_ID':'master_unique_ID'})
df1 = df1.merge(df_conflicting_ids, on='email', how='left')
df1.loc[df1['master_unique_ID'].notnull(), 'unique_ID'] = df1['master_unique_ID']
print(df1)
#       email unique_ID master_unique_ID
# 0     stack        10              NaN
# 1  overflow        40               40
# 2  overflow        40               40
del df1['master_unique_ID']  # don't need column anymore
I'm not sure whether you want to drop duplicates after overwriting the unique_IDs.  Also, consider storing unique_ID as an integer from the start, since you convert it to int before testing anyway.
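If you do want both of those clean-ups, a minimal follow-up might look like this (starting from df1 as it stands at the end of the example above, with the IDs still stored as strings):

```python
import pandas as pd

# df1 as it looks after the overwrite step in the example above
df1 = pd.DataFrame(
    [['stack', '10'], ['overflow', '40'], ['overflow', '40']],
    columns=['email', 'unique_ID'])

df1['unique_ID'] = df1['unique_ID'].astype(int)  # store IDs as integers
df1 = df1.drop_duplicates()                      # collapse the now-identical rows
print(df1)
#       email  unique_ID
# 0     stack         10
# 1  overflow         40
```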