In many programming languages, an operation like 0.1 + 0.2 does not produce exactly 0.3, but rather 0.30000000000000004, so a check like 0.1 + 0.2 == 0.3 returns false.
As far as I understand, this is due to the IEEE 754 standard, which is why the behaviour is common to many languages.
The same behaviour can be seen in C#. I used the following code snippet to test it:
static void Main(string[] args)
{
    double x = 0.2;
    double y = 0.1;
    double res = 0.3;
    double z = x + y;
    Console.WriteLine("bool result = {0}", z == res); // outputs false
    Console.ReadLine();
}
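To see the discrepancy rather than just the false comparison, I also printed both values with the "R" (round-trip) format specifier, which shows all the digits needed to reconstruct the stored double:

```csharp
using System;

class Program
{
    static void Main()
    {
        double z = 0.1 + 0.2;
        // The round-trip format exposes the extra digits hidden by the default formatting.
        Console.WriteLine(z.ToString("R"));    // 0.30000000000000004
        Console.WriteLine(0.3.ToString("R"));  // 0.3
    }
}
```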
but if I run the same code with float variables, it behaves differently:
static void Main(string[] args)
{
    float x = 0.2f;
    float y = 0.1f;
    float res = 0.3f;
    float z = x + y;
    Console.WriteLine("bool result = {0}", z == res); // outputs true
    Console.ReadLine();
}
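To rule out a formatting artifact, I also compared the raw bit patterns of the two floats (using BitConverter, which I assume reflects the stored IEEE 754 representation):

```csharp
using System;

class Program
{
    static void Main()
    {
        float z = 0.1f + 0.2f;
        float res = 0.3f;
        // Compare the underlying 32-bit patterns, not just the numeric values.
        int zBits = BitConverter.ToInt32(BitConverter.GetBytes(z), 0);
        int resBits = BitConverter.ToInt32(BitConverter.GetBytes(res), 0);
        Console.WriteLine(zBits == resBits); // True: the floats are bit-for-bit identical
    }
}
```

So with float the two sides really are the same stored value, not merely printed the same.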
Can anyone explain this to me?
