I have this code:
#include <stdio.h>

#define Third (1.0/3.0)
#define ThirdFloat (1.0f/3.0f)

int main(void)
{
    double a = 1/3;
    double b = 1.0/3.0;
    double c = 1.0f/3.0f;
    printf("a = %20.15lf, b = %20.15lf, c = %20.15lf\n", a, b, c);

    float d = 1/3;
    float e = 1.0/3.0;
    float f = 1.0f/3.0f;
    printf("d = %20.15f, e = %20.15f, f = %20.15f\n", d, e, f);

    double g = Third*3.0;
    double h = ThirdFloat*3.0;
    float i = ThirdFloat*3.0f;
    printf("(1/3)*3: g = %20.15lf; h = %20.15lf, i = %20.15f\n", g, h, i);
    return 0;
}
This gives the following output:
a =    0.000000000000000, b =    0.333333333333333, c =    0.333333343267441
d =    0.000000000000000, e =    0.333333343267441, f =    0.333333343267441
(1/3)*3: g =    1.000000000000000; h =    1.000000029802322, i =    1.000000000000000
I assume the output for a and d looks like this because 1/3 is integer division, which yields 0, and only that result is converted to double/float.
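To check that assumption, forcing one operand to double before the division does give the expected value (a_int and a_cast are just names I picked for this test):

#include <stdio.h>

int main(void)
{
    double a_int  = 1 / 3;          /* integer division gives 0, then 0 is converted */
    double a_cast = (double)1 / 3;  /* 3 is promoted to double, division done in double */
    printf("a_int = %.15f, a_cast = %.15f\n", a_int, a_cast);
    return 0;
}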
b looks good; e is wrong because of float's low precision, and so are c and f.
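As a sanity check on the precision point, the limits in float.h show the gap (a small sketch using only the standard constants):

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* float holds about 6 decimal digits, double about 15,
       which matches where c, e and f diverge from b */
    printf("FLT_DIG = %d, FLT_EPSILON = %g\n", FLT_DIG, FLT_EPSILON);
    printf("DBL_DIG = %d, DBL_EPSILON = %g\n", DBL_DIG, DBL_EPSILON);
    return 0;
}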
But I have no idea why g comes out exact (I thought 1.0/3.0 was the same as 1.0lf/3.0lf, but then g should also be off) and why h isn't the same as i.
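To make the question concrete, printing the two constants shows exactly what gets multiplied by 3.0 in g and h:

#include <stdio.h>

#define Third (1.0/3.0)
#define ThirdFloat (1.0f/3.0f)

int main(void)
{
    /* both printed as double so the stored bits are visible */
    printf("Third      = %.17f\n", Third);
    printf("ThirdFloat = %.17f\n", (double)ThirdFloat);
    return 0;
}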