This is more of a question out of interest. In my naivete, not knowing exactly how Decimal() is supposed to be used, I thought it would work fine when I did something like Decimal(120.24). However, see the following code:
>>> from decimal import Decimal
>>> d = Decimal(120.24)
>>> d
Decimal('120.2399999999999948840923025272786617279052734375')
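For comparison, the float by itself already seems to carry all of those digits; asking Python to print it with extra precision (just me poking at it with format()) shows the same value:

>>> format(120.24, '.50f')
'120.23999999999999488409230252727866172790527343750000'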
I changed how my code worked to avoid the problems this was causing, but my question is why it happens at all. I did a little searching to try to find a straightforward answer, but most questions were about how to work with the two types, not the more general "why".
So, my question is: what is happening behind the scenes that makes the Decimal ever so slightly inaccurate? Why doesn't it just come out as Decimal('120.24')?
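For what it's worth, constructing from a string does come out exactly as I expected:

>>> Decimal('120.24')
Decimal('120.24')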
In other words, why does Python itself treat floats differently from Decimals? Why are floats "inaccurate"? In most other languages, as far as I know, floats are accurate to a certain number of decimal places, and certainly to more than two.
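What confuses me further is that the float itself looks exact when I simply echo it, even though basic arithmetic shows the same kind of error:

>>> 120.24
120.24
>>> 0.1 + 0.2
0.30000000000000004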
For reference, I'm using Python 3.6.