I recently found this C code online:
#include <stdio.h>

int main(void) {
    double x = 3.1;
    float y = 3.1;
    if (x == y)
        printf("Yes\n");
    else
        printf("No\n");
    return 0;
}
The output is No.
I added a few more printf calls to investigate:
#include <stdio.h>

int main(void) {
    double x = 3.1;
    float y = 3.1;
    if (x == y)
        printf("Yes\n");
    else
        printf("No\n");
    printf("%.10f\n", y);   /* y is promoted to double when passed to printf */
    printf("%.10f\n", x);
    return 0;
}
The output was:

No
3.0999999046
3.1000000000
Why is this? Why has the float variable lost precision when printed to 10 decimal places?
