I have this "scientific application" in which a Single value should be rounded before being presented in the UI. According to this MSDN article, due to "loss of precision", the Math.Round(Double, Int32) method will sometimes behave "unexpectedly", e.g. rounding 2.135 to 2.13 rather than 2.14.
As I understand it, this issue is not related to "banker's rounding" (see for example this question).
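To illustrate (a minimal sketch; the comments state my understanding of why both rounding modes give the same result):

    using System;

    // 2.135 cannot be represented exactly as a Double; the stored value is
    // slightly below the midpoint, so rounding goes down in both modes.
    Console.WriteLine(Math.Round(2.135, 2));                                // 2.13
    Console.WriteLine(Math.Round(2.135, 2, MidpointRounding.AwayFromZero)); // 2.13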
In the application, someone apparently chose to address this issue by explicitly converting the Single to a Decimal before rounding (i.e. Math.Round((Decimal)mySingle, 2)) so that the Math.Round(Decimal, Int32) overload is called instead. Aside from any binary-to-decimal conversion issues that may arise, this "solution" can also cause an OverflowException to be thrown if the Single value is too small or too large to fit in the Decimal type.
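Here is the problematic call with a value that demonstrates the overflow (mySingle is a placeholder name, and 3.4e29f is just one example of a value beyond the Decimal range):

    using System;

    float mySingle = 3.4e29f;  // exceeds Decimal.MaxValue (roughly 7.9e28)

    // The cast itself throws an OverflowException before Math.Round runs.
    decimal rounded = Math.Round((decimal)mySingle, 2);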
Catching the exception and falling back to Math.Round(Double, Int32) when the conversion fails does not strike me as an ideal solution, and neither does rewriting the application to use Decimal throughout.
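For reference, the kind of fallback I mean would look roughly like this (a sketch only; RoundForDisplay is a name I made up, not code from the application):

    using System;

    static double RoundForDisplay(float value, int digits)
    {
        try
        {
            // Preferred path: round in Decimal to avoid the binary
            // representation issue described above.
            return (double)Math.Round((decimal)value, digits);
        }
        catch (OverflowException)
        {
            // Fallback for values outside the Decimal range.
            return Math.Round((double)value, digits);
        }
    }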
Is there a more or less "correct" way to deal with this situation, and if so, what might it be?