The program below prints 0.000000, although the format specifier is the one for a float.
#include <stdio.h>

int main(void)
{
    printf("%f", 5/9);
    return 0;
}
The expression 5/9 has two integer operands and so is evaluated using integer division. Hence the result is 0.
You then invoke undefined behaviour by passing an int where the %f format specifier expects a double.
Change at least one of the operands to a floating-point value to get floating-point division:
printf("%f", 5.0/9.0);
printf ("%f", 5/9);
5 / 9 is an integer division. The expression yields an int, but %f requires a double argument.
Change the call to:
printf ("%f", 5 / 9.0);
to perform a floating-point division and have a double argument.
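A cast on either operand does the same job; a brief sketch, assuming you want to keep both literals as integers:

#include <stdio.h>

int main(void)
{
    /* The cast makes one operand a double, so the division is done in
       floating point and the argument matches what %f expects. */
    printf("%f\n", (double) 5 / 9);  /* prints 0.555556 */
    return 0;
}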
5 and 9 are integers. Division of two integers will always be an integer. Make one of them a float or a double:
printf ("%f", 5.0/9);
or
printf ("%f", 5/9.0);