Is there any way to ensure that a floating-point value entered by the user with two digits after the decimal point retains its exact value, rather than losing precision?
This is the example case:
I want to round a float, shown here with 50 digits after the radix point, like this:
Before rounding =
0.70999997854232788085937500000000000000000000000000
to this:
After rounding =
0.71000000000000000000000000000000000000000000000000
I became confused when I wanted to compare a float value in a condition like this:
== Program Start ==
Input : 0.71

float input;
scanf("%f", &input);    /* program logic: read the user's 0.71 */

if (input == 0.71) {
    printf("True !");
} else {
    printf("False !");
}

Output : False !
== Program End ==
The output was False !, and it will always be False !, because the true value of the user's input is 0.70999997854232788085937500000000000000000000000000, not 0.71000000000000000000000000000000000000000000000000.
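To see this for yourself, here is a minimal, self-contained reduction of the program above (it hard-codes the value that scanf("%f") would store for "0.71", so nothing depends on user input):

#include <stdio.h>

int main(void)
{
    float input = 0.71f;    /* the same value scanf("%f") stores for "0.71" */

    /* Show the value the float really holds, to 50 decimal places. */
    printf("%.50f\n", input);
    /* prints 0.70999997854232788085937500000000000000000000000000 */

    /* 'input' is promoted to double here but stays 0.7099999785...,
       while the double literal 0.71 is a slightly different value,
       so the comparison is false. */
    if (input == 0.71) {
        printf("True !\n");
    } else {
        printf("False !\n");    /* this branch always runs */
    }
    return 0;
}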
Is there any way to round a float value like that? I read about the potential for inaccuracies with floating point here:
- Why Are Floating Point Numbers Inaccurate?

and followed it to these links:

- Is there a function to round a float in C or do I need to write my own?
- Rounding Number to 2 Decimal Places in C
However, these don't answer my question. If I use ceilf(input * 100) / 100, it turns the input 0.71 into 0.71000 when printed with printf() using the %.5f format, which seems to work. But when I print it with %.50f, the real stored value appears: 0.709999978542327880859375. So I can't compare against that value in my condition.
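For completeness, this is a sketch of that attempt (ceilf is declared in math.h; linking with -lm may be needed):

#include <math.h>
#include <stdio.h>

int main(void)
{
    float input = 0.71f;                        /* what scanf("%f") stores */
    float rounded = ceilf(input * 100) / 100;   /* the rounding attempt */

    printf("%.5f\n", rounded);    /* prints 0.71000 - looks correct */
    printf("%.50f\n", rounded);   /* prints 0.70999997854232788085937500... */

    if (rounded == 0.71) {
        printf("True !\n");
    } else {
        printf("False !\n");      /* still false */
    }
    return 0;
}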
So, if floating point can never be exact, what is the trick to make the program logic above take the true branch of that condition?