I was messing around with storing floats and doubles in NSUserDefaults for an iPhone application, and I came across some behaviour that doesn't match how I understood their precision to work.
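For context, PPrefs in the tests below is just a thin convenience wrapper around NSUserDefaults. This is only a simplified sketch (the real wrapper may differ in details), but the relevant methods basically forward to [NSUserDefaults standardUserDefaults] along these lines:

#import <Foundation/Foundation.h>

// Simplified sketch of the PPrefs wrapper -- the actual class may differ,
// but each method essentially forwards to the shared NSUserDefaults.
@interface PPrefs : NSObject
+ (void)setFloat:(float)value forKey:(NSString *)key;
+ (float)getFloatForKey:(NSString *)key;
+ (void)setDouble:(double)value forKey:(NSString *)key;
+ (double)getDoubleForKey:(NSString *)key;
+ (void)removeObjectForKey:(NSString *)key;
@end

@implementation PPrefs
+ (void)setFloat:(float)value forKey:(NSString *)key {
    [[NSUserDefaults standardUserDefaults] setFloat:value forKey:key];
}
+ (float)getFloatForKey:(NSString *)key {
    return [[NSUserDefaults standardUserDefaults] floatForKey:key];
}
+ (void)setDouble:(double)value forKey:(NSString *)key {
    [[NSUserDefaults standardUserDefaults] setDouble:value forKey:key];
}
+ (double)getDoubleForKey:(NSString *)key {
    return [[NSUserDefaults standardUserDefaults] doubleForKey:key];
}
+ (void)removeObjectForKey:(NSString *)key {
    [[NSUserDefaults standardUserDefaults] removeObjectForKey:key];
}
@end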
This first test works exactly as I figured:
{
    NSString *key = @"OneLastKey";
    [PPrefs setFloat:235.1f forKey:key];
    GHAssertFalse([PPrefs getFloatForKey:key] == 235.1, @"");
    [PPrefs removeObjectForKey:key];
}
However, this one doesn't:
{
    NSString *key = @"SomeDoubleKey";
    [PPrefs setDouble:234.32 forKey:key];
    GHAssertEquals([PPrefs getDoubleForKey:key], 234.32, @"");
    [PPrefs removeObjectForKey:key];
}
This is the output GHUnit gives me:
'234.320007324' should be equal to '234.32'. 
But if I first cast the double to a float and then back to a double, it works without fail:
{
    NSString *key = @"SomeDoubleKey";
    [PPrefs setDouble:234.32 forKey:key];
    GHAssertEquals([PPrefs getDoubleForKey:key], (double)(float)234.32, @"");
    [PPrefs removeObjectForKey:key];
}
I was under the assumption that numbers written without an 'f' suffix are already treated as doubles. Is this incorrect? And if so, why does casting to a float and then back to a double work correctly?
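To show concretely what I mean about the precision, here is a standalone snippet (separate from the tests above) that prints 234.32 kept as a double and after squeezing it through a float; the second value is exactly what GHUnit is reporting:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        double asDouble = 234.32;                // double literal, kept as a double
        double viaFloat = (double)(float)234.32; // same literal squeezed through a float first

        NSLog(@"as double: %.9f", asDouble);     // prints 234.320000000
        NSLog(@"via float: %.9f", viaFloat);     // prints 234.320007324
    }
    return 0;
}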
