I'm working with a large DataFrame, but for simplicity's sake, let's say I have a DataFrame with columns labeled year, stat1, stat2, ..., statn:
       year   stat1   stat2  ...  statn
0      1970       #       #  ...      #
1      1971       #       #  ...      #
2      1972       #       #  ...      #
3      1973       #       #  ...      #
...
997    2020       #       #  ...      #
998    2021       #       #  ...      #
999    2022       #       #  ...      #
The year column spans 1970-2022 and then repeats after running through the whole range, so there are several 1970 rows, several 1971 rows, several 2022 rows, etc. However, rows with missing data have been dropped, so the pattern doesn't repeat perfectly.
What I am trying to do is merge all rows that share the same year and average their data points (stat1, stat2, ..., statn), so the modified DataFrame has only 53 rows (1970-2022), with each stat being the average over that year's rows.
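For reference, here is a minimal sketch of the kind of operation I think I'm after, on a small made-up frame (the column names and values are placeholders for my actual data, and I'm assuming all the stat columns are numeric):

    import pandas as pd

    # Toy frame with repeated years; some rows of the repeating pattern are missing
    df = pd.DataFrame({
        "year":  [1970, 1971, 1972, 1970, 1971, 1972, 1970],
        "stat1": [1.0, 2.0, 3.0, 3.0, 4.0, 5.0, 2.0],
        "stat2": [10.0, 20.0, 30.0, 30.0, 40.0, 50.0, 20.0],
    })

    # Collapse duplicate years into one row each, averaging every stat column
    averaged = df.groupby("year", as_index=False).mean()
    print(averaged)

Is this the right approach for a frame with many stat columns, or is there a better way?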