Avoid flooding your global environment with separate, similarly structured data frames in the first place. Instead, continue to use a list of data frames; see @GregorThomas's best-practices answer for why. In fact, a named list is preferable, since it allows indexing by name.
# DEFINE A NAMED LIST OF DATA FRAMES
df_list <- list(corporate_service = corporate_service, 
                finance = finance, 
                its = its, 
                law = law, 
                market_services = market_services, 
                operations = operations, 
                president = president, 
                member_services = member_services, 
                system_planning = System_Planning)
# REMOVE ORIGINALS FROM GLOBAL ENVIRONMENT
rm(corporate_service, finance, its, law, market_services, 
   operations, president, member_services, System_Planning)
# REVIEW STRUCTURE
str(df_list)
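Because the list is named, elements can be retrieved by name or by position, whichever is convenient. A minimal sketch with a toy list (hypothetical stand-in data, since your actual data frames are not reproduced here):

```r
# TOY NAMED LIST STANDING IN FOR df_list (HYPOTHETICAL DATA)
demo_list <- list(finance = data.frame(Amount = c(100, 200)),
                  law     = data.frame(Amount = c(50, 75)))

demo_list$finance       # INDEX BY NAME
demo_list[["law"]]      # EQUIVALENT NAME INDEXING
demo_list[[1]]          # INDEX BY POSITION
names(demo_list)        # c("finance", "law")
```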
Then define a function that operates on a single data frame (not the list) together with its list name, and call it iteratively:
# REQUIRES dplyr FOR filter() AND select()
Calc <- function(df, nm) {
           df <- select(filter(df, Total_Flag == 1), Element, Amount, Total)
           # paste0 AVOIDS THE SPACE paste() WOULD INSERT BEFORE ".csv"
           write.csv(df, file.path("path", "to", "my", "destination", paste0(nm, ".csv")),
                     row.names = FALSE)
           return(df)           
        }
 
# ASSIGN TO A NEW LIST
new_df_list <- mapply(Calc, df_list, names(df_list), SIMPLIFY=FALSE)
new_df_list <- Map(Calc, df_list, names(df_list))    # EQUIVALENT WRAPPER TO ABOVE
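Map is defined in base R as mapply with SIMPLIFY = FALSE, so the two calls above return identical lists. A quick check on toy inputs (hypothetical function and data, just for illustration):

```r
# HYPOTHETICAL TWO-ARGUMENT FUNCTION AND NAMED LIST
f  <- function(x, nm) paste0(nm, "_", x)
xs <- list(a = 1, b = 2)

res_map    <- Map(f, xs, names(xs))
res_mapply <- mapply(f, xs, names(xs), SIMPLIFY = FALSE)

identical(res_map, res_mapply)   # TRUE
```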
To be clear, you lose no functionality of a data frame if it is stored in a larger container.
head(new_df_list$corporate_service)
tail(new_df_list$finance)
summary(new_df_list$its)
Such containers also make it easy to run the same operation across every element:
lapply(new_df_list, summary)
You can even concatenate all data frame elements together, with a column identifying the corresponding list name:
final_df <- dplyr::bind_rows(new_df_list, .id="division")
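On toy data (hypothetical frames, requires dplyr), bind_rows with .id stacks the rows and records each element's list name in the new column:

```r
library(dplyr)

# HYPOTHETICAL NAMED LIST OF DATA FRAMES
demo_list <- list(finance = data.frame(Amount = c(100, 200)),
                  law     = data.frame(Amount = 50))

combined <- bind_rows(demo_list, .id = "division")
combined
#   division Amount
# 1  finance    100
# 2  finance    200
# 3      law     50
```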
Overall, your organization and data management improve since you work with a single, indexed object rather than many separate ones that require ls, mget, get, eval, or assign for dynamic operations.
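For instance, looping over divisions dynamically needs only names() and [[ indexing, never get() or assign(). A sketch with a toy list (placeholder names and values):

```r
# TOY NAMED LIST STANDING IN FOR new_df_list (HYPOTHETICAL DATA)
demo_list <- list(finance = data.frame(Amount = c(100, 200)),
                  law     = data.frame(Amount = c(50, 75)))

# DYNAMIC ITERATION: NO get()/assign() NEEDED
totals <- sapply(names(demo_list), function(nm) sum(demo_list[[nm]]$Amount))
totals   # NAMED NUMERIC VECTOR: finance = 300, law = 125
```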