Here's a custom function that allows stepping through decimal increments:
def my_range(start, stop, step):
    # Repeatedly add step to a running value and yield it while it stays below stop.
    i = start
    while i < stop:
        yield i
        i += step
It works like this:
out = list(my_range(0, 1, 0.1))
print(out)
[0, 0.1, 0.2, 0.30000000000000004, 0.4, 0.5, 0.6, 0.7, 0.7999999999999999, 0.8999999999999999, 0.9999999999999999]
Now, there's nothing surprising about this: it happens because of floating point inaccuracies, since 0.1 has no exact binary representation. Those precision errors are expected.
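To make this concrete, here is what the float 0.1 actually stores (a quick check using the standard library's decimal module):

from decimal import Decimal

# The double closest to 0.1 is slightly larger than 0.1, so repeatedly
# adding it lets a small upward error accumulate.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625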
Take numpy, on the other hand:
import numpy as np
out = np.arange(0, 1, 0.1)
print(out)
array([ 0. ,  0.1,  0.2,  0.3,  0.4,  0.5,  0.6,  0.7,  0.8,  0.9]) 
What's interesting is that no visible precision errors are introduced here. I thought this might just be a matter of what the __repr__ shows, so to confirm, I tried this:
x = list(my_range(0, 1.1, 0.1))[-1]
print(x.is_integer())
False
x = list(np.arange(0, 1.1, 0.1))[-1]
print(x.is_integer())
True
So my function produces an incorrect final value (it should be 1.0, but it is actually 1.0999999999999999), while np.arange gets it right.
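For completeness, printing the actual last values (using the same my_range as above) shows the difference directly:

last_gen = list(my_range(0, 1.1, 0.1))[-1]
last_np = np.arange(0, 1.1, 0.1)[-1]
# Should print: 1.0999999999999999 1.0
print(last_gen, last_np)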
I'm aware of Is floating point math broken?, but the point of this question is: how does np.arange avoid introducing these precision errors, when a plain Python accumulation loop does not?