I've noticed something very odd when adding nullable floats. Take the following code:
float? a = 2.1f;
float? b = 3.8f;
float? c = 0.2f;

float? result =
    (a == null ? 0 : a)
    + (b == null ? 0 : b)
    + (c == null ? 0 : c);

float? result2 =
    (a == null ? 0 : a.Value)
    + (b == null ? 0 : b.Value)
    + (c == null ? 0 : c.Value);
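
For completeness, the exact stored values can be inspected by printing with the "G9" format (nine significant digits are enough to round-trip a float), e.g. tacked onto the end of the snippet above:

// Print up to nine significant digits so the default formatting
// isn't hiding anything (both values are non-null here).
Console.WriteLine(result.Value.ToString("G9"));
Console.WriteLine(result2.Value.ToString("G9"));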
result is 6.099999, whereas result2 is 6.1. I'm lucky to have stumbled on this at all, because if I change the values of a, b, and c, the behavior usually appears correct. This may also happen with other arithmetic operators or other nullable value types, but this is the case I've been able to reproduce.

What I don't understand is why the implicit conversion from float? to float didn't work correctly in the first case. I could perhaps understand it if the compiler tried to produce an int, given that the other branch of each conditional is 0, but that doesn't appear to be what's happening. Given that result only appears incorrect for certain combinations of floating-point values, I'm assuming this is some kind of rounding problem caused by multiple conversions (possibly due to boxing/unboxing or something similar).
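
To double-check the int idea, the compile-time type of each conditional expression can be inspected with a small generic helper (StaticTypeName below is just a throwaway I put together for this; generic type inference captures the static type of whatever is passed in):

using System;

class TypeCheck
{
    // Throwaway helper: T is inferred from the argument, so typeof(T)
    // reports the compile-time type of the expression passed in.
    static string StaticTypeName<T>(T value) => typeof(T).Name;

    static void Main()
    {
        float? a = 2.1f;

        // int on one branch, float? on the other: the result type is float?,
        // because int converts implicitly to float? (not the other way around).
        Console.WriteLine(StaticTypeName(a == null ? 0 : a));        // Nullable`1

        // int on one branch, float on the other: the result type is float.
        Console.WriteLine(StaticTypeName(a == null ? 0 : a.Value));  // Single
    }
}

So neither expression ends up as an int; the first conditional is typed float? and the second float, but I still don't see why that should change the sum.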
Any ideas?