This is an exercise from Udacity's Deep Learning course.
Can anyone explain why the final answer is not 1.0?
v1 = 1e9
v2 = 1e-6
for i in range(int(1e6)):
    v1 = v1 + v2
print('answer is', v1 - 1e9)
# answer is 0.95367431640625
Because 1e-6 cannot be represented exactly as a binary floating-point value:

print("{:.75f}".format(1e-6))
# 0.000000999999999999999954748111825886258685613938723690807819366455078125000

If you use a value that can be represented exactly, such as v2 = 1.0/(2**20), and change the iteration count to 2**20, the printed result is exactly 1.0 (an error of 0). However, as @user2357112 pointed out, even this only holds if all of the intermediate results can be represented exactly as floating-point values.
See the Python tutorial for more details: https://docs.python.org/3/tutorial/floatingpoint.html
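As a quick check of that claim, here is a minimal sketch (my addition, assuming Python 3): since 1.0/2**20 is a power of two, every intermediate sum 1e9 + k/2**20 still fits in the 53-bit significand of a double, so no rounding ever occurs.

v1 = 1e9
v2 = 1.0 / 2**20           # a power of two, exactly representable
for i in range(2**20):
    v1 = v1 + v2           # every intermediate sum is exact here
print('answer is', v1 - 1e9)
# answer is 1.0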
 
    
Let's check what v1 sees of v2 in the floating-point addition:
>>> v1, v2 = 1e9, 1e-6
>>> v3 = (v1 + v2) - v1
>>> print("%.25f %.25f" % (v3, 1e6 * v3))
0.0000009536743164062500000 0.9536743164062500000000000
What happens is that, while the exponent of 1e-6 is being aligned with that of 1e9, everything below 2**-23 (the spacing between adjacent doubles near 1e9) is lost to rounding; in the mantissa of 1e-6 the bits at 2**-21, 2**-22 and 2**-23 are zero, so only the leading 1 at 2**-20 survives. Each iteration therefore adds exactly 2**-20, and the final value is 10**6 * 2**(-20) = (1.024)**(-2), which is exactly the observed value:
>>> print "%.17f" % (1.024)**(-2)
0.95367431640625000
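As a sanity check (my addition, assuming Python 3), the effective per-iteration increment is indeed exactly 2**-20, and 10**6 such increments reproduce the printed result exactly:

>>> v1, v2 = 1e9, 1e-6
>>> (v1 + v2) - v1 == 2.0**-20
True
>>> 1e6 * 2.0**-20
0.95367431640625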
