I'm maintaining a piece of legacy code that uses float variables to manage amounts of money, and this causes some approximation issues.
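As a minimal sketch of the kind of error I'm hitting (the figures are invented, but the behaviour is representative of what the legacy code does):

    public class FloatMoneyDemo {
        public static void main(String[] args) {
            // Summing 10 cents a hundred times with float does not give exactly 10.00,
            // because 0.10 has no exact binary floating-point representation
            float total = 0.0f;
            for (int i = 0; i < 100; i++) {
                total += 0.10f;
            }
            System.out.println(total);   // prints something like 10.000002, not 10.0

            // The same computation with BigDecimal (using the String constructor) is exact
            java.math.BigDecimal exact = java.math.BigDecimal.ZERO;
            java.math.BigDecimal tenCents = new java.math.BigDecimal("0.10");
            for (int i = 0; i < 100; i++) {
                exact = exact.add(tenCents);
            }
            System.out.println(exact);   // prints 10.00
        }
    }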
I know that this is not the correct way to represent money and that BigDecimal should be used instead, but refactoring all the legacy code would take a lot of time. In the meantime, I would like to understand: what is the worst error that this approximation can introduce?
I would also appreciate a link to some theoretical document that explains the problem in a detailed (but understandable) way, including how to estimate the worst-case error.
Any help would be appreciated.