I wrote a Python script, shown below, to test using modulo 1 to extract the decimal part of a float x:
x = 0.72              # starting value chosen so the first doubling gives 1.44 (matches the output below)
while x < 100:        # the real stopping condition doesn't matter for the question; any bound works
    x *= 2
    print("x: ", x)
    decimal = x % 1
    print("x%1: ", decimal)
This is a sample of the output:
x:  1.44
x%1:  0.43999999999999995
x:  2.88
x%1:  0.8799999999999999
....
....
Could someone please explain the reason for the loss of accuracy after applying modulo 1? The 53 bits of precision of a float should be enough to represent 0.44. What operation (on the IEEE 754 representation, I assume) causes the loss of precision to 0.43999999999999995?
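In case it is relevant, this is the kind of check I had in mind for the representation (as far as I understand, constructing decimal.Decimal from a float converts the stored binary64 value exactly, so it shows what each literal actually rounds to):

from decimal import Decimal

# Decimal(float) is exact, so these print the stored binary64 values in full.
print(Decimal(1.44))      # the double nearest 1.44 (slightly below 1.44, judging by the output above)
print(Decimal(0.44))      # the double nearest 0.44
print(Decimal(1.44 % 1))  # the exact value behind 0.43999999999999995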
I am using Python 3.6.
It is clear that such errors are to be expected in floating-point math, but I would like to know which operation triggered this particular precision loss, i.e. what happened to the initial IEEE 754 representation, and why?
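This is how I have been trying to look at the IEEE 754 side directly; float.hex() prints the significand and exponent of the stored double, so any difference between 1.44 % 1 and the literal 0.44 should show up there:

# float.hex() shows the exact significand and exponent of each double.
print((1.44).hex())
print((1.44 % 1).hex())
print((0.44).hex())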
