I have noticed some interesting behavior with float rounding/truncation in C#. Namely, when a float literal goes beyond the precision a float is guaranteed to represent (7 significant decimal digits), then (a) explicitly casting a float result to float (a semantically unnecessary operation) and (b) storing the intermediate result of a calculation in a local variable both change the output. An example:
using System;
class Program
{
    static void Main()
    {
        float f = 2.0499999f;             // more significant digits than a float guarantees
        var a = f * 100f;                 // intermediate result stored in a local
        var b = (int) (f * 100f);         // truncated directly
        var c = (int) (float) (f * 100f); // truncated after a seemingly redundant float cast
        var d = (int) a;                  // the local, truncated
        var e = (int) (float) a;          // the local, cast to float and truncated
        Console.WriteLine(a);
        Console.WriteLine(b);
        Console.WriteLine(c);
        Console.WriteLine(d);
        Console.WriteLine(e);
    }
}
The output is:
205
204
205
205
205
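For reference, widening f to double (a lossless conversion, since every float value is exactly representable as a double) exposes the exact quantities involved. The following is my own sketch, run inside the same Main and assuming standard IEEE 754 semantics; the comments show the values I see:

        double exact = f;                          // ~2.0499999523162842, i.e. slightly below 2.05
        double product = exact * 100.0;            // 204.99999523162841796875, exactly representable
        Console.WriteLine((int) product);          // prints 204: truncating the wide product
        Console.WriteLine((int) (float) product);  // prints 205: rounding to float lands exactly on 205f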
In the JITted debug build (x86) on my machine, b is calculated as follows:
          var b = (int) (f * 100f);
0000005a  fld         dword ptr [ebp-3Ch]       ; push f (32-bit float) onto the x87 stack, widened internally
0000005d  fmul        dword ptr ds:[035E1648h]  ; multiply by the 100f constant; product stays at x87 precision
00000063  fstp        qword ptr [ebp-5Ch]       ; spill the product to memory as a 64-bit double
00000066  movsd       xmm0,mmword ptr [ebp-5Ch] ; reload the double into an SSE register
0000006b  cvttsd2si   eax,xmm0                  ; truncate the double toward zero into a 32-bit int
0000006f  mov         dword ptr [ebp-44h],eax   ; store the result into b
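If I read that right, the product of f and 100f is formed at the x87's wider internal precision and spilled to memory as a 64-bit double, but it is never rounded back to a 32-bit float before the truncation. A C# model of that path (my interpretation, with double standing in for the wider intermediate):

        float f = 2.0499999f;
        double wide = (double) f * 100.0;  // product kept wider than float: ~204.9999952
        int b = (int) wide;                // cvttsd2si-style truncation toward zero: 204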
whereas d is calculated as
          var d = (int) a;
00000096  fld         dword ptr [ebp-40h]       ; push a (already a 32-bit float) onto the x87 stack
00000099  fstp        qword ptr [ebp-5Ch]       ; spill it to memory as a 64-bit double
0000009c  movsd       xmm0,mmword ptr [ebp-5Ch] ; reload the double into an SSE register
000000a1  cvttsd2si   eax,xmm0                  ; truncate toward zero into a 32-bit int
000000a5  mov         dword ptr [ebp-4Ch],eax   ; store the result into d
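Here there is no fmul: a was already rounded to a 32-bit float when the earlier assignment stored it, so only the rounded value gets truncated. Again as a C# model (my interpretation, not disassembler output):

        float f = 2.0499999f;
        float a = f * 100f;  // storing into a float local rounds the product to exactly 205f
        int d = (int) a;     // truncating 205.0 gives 205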
Finally, my question: why is the second line of the output (b = 204) different from the fourth (d = 205)? Does that extra fmul make such a difference? Note also that if the last (already unrepresentable) digit of f is removed or even reduced, everything "falls into place".