Certain floating point numbers pick up an inherent inaccuracy from their binary floating point representation:
> puts "%.50f" % (0.5)  # cleanly representable
0.50000000000000000000000000000000000000000000000000
> puts "%.50f" % (0.1)  # not cleanly representable
0.10000000000000000555111512312578270211815834045410
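(For reference, Float#to_r gives an exact rational readout of what the binary double actually stores, so you can see the stored value directly rather than through a format string:)
> puts 0.1.to_r
3602879701896397/36028797018963968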
This is nothing new.  But why does Ruby's BigDecimal also show this behaviour?
> puts "%.50f" % ("0.1".to_d)
0.10000000000000000555111512312578270211815834045410
(I'm using the Rails shorthand .to_d instead of BigDecimal.new for brevity only; this is not a Rails-specific question.)
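For completeness, here's the same check in plain Ruby, which behaves identically as far as I can tell:
> require "bigdecimal"
> puts "%.50f" % BigDecimal("0.1")
0.10000000000000000555111512312578270211815834045410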
Question:  Why is "0.1".to_d still showing errors on the order of 10^-17?  Wasn't the express purpose of BigDecimal to avoid exactly this kind of inaccuracy?
At first I thought this was because I was converting an already-inaccurate floating point 0.1 to BigDecimal, and BigDecimal was just losslessly representing that inaccuracy.  But I made sure I was using the string constructor (as in the snippet above), which should avoid the problem.
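To double-check that the choice of constructor matters, here's a sketch contrasting the string constructor with the Float one (BigDecimal#to_s("F") prints the stored decimal digits in plain notation; the Float form requires a precision argument, 20 here):
> puts BigDecimal("0.1").to_s("F")
0.1
> puts BigDecimal(0.1, 20).to_s("F")
0.10000000000000000555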
EDIT:
A bit more investigation shows that BigDecimal does still represent things cleanly internally.  (Obvious, because otherwise this would be a huge bug in a very widely used system.)  Here's an example with an operation that would expose the error if the representation were lossy:
> puts "%.50f" % ("0.1".to_d * "10".to_d)
1.00000000000000000000000000000000000000000000000000
If the representation were lossy, that would show the same error as above, just shifted by an order of magnitude. What is going on here?
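One hypothesis I haven't ruled out: maybe the "%f" directive coerces its argument through Float (i.e. calls to_f) before formatting, so the error comes from the output path rather than from BigDecimal itself. The output is at least consistent with that:
> puts "%.50f" % ("0.1".to_d.to_f)
0.10000000000000000555111512312578270211815834045410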