I would like to know the best way to overcome the unexpected behavior of numpy.arange() when floats are involved. In particular, I have identified two problems in the following example:
I need to discretize my time domain from 0 up to t_max (inclusive), with time step dt. For that I define these two variables and redefine dt so that the time vector is uniformly spaced, as follows.
import numpy as np

t_max = 200
dt = 0.07
dt = t_max/np.ceil(t_max/dt)  # dt = 0.06997900629811056
t = np.arange(0, t_max + dt, dt, dtype=float)  # so that t = [0, 0.0699..., ..., 200]
The first problem is that t[-1] = 199.99999999999997: with the redefined dt, t_max/dt = 2858.0000000000005, and not 2858.0 as expected.
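For what it's worth, one alternative I have tried for this first problem is np.linspace, which computes the points from both endpoints instead of accumulating dt, so the last element is exactly t_max (here I reuse the same step count as in my np.ceil redefinition):

n = int(np.ceil(t_max/0.07))      # 2858 intervals
t = np.linspace(0, t_max, n + 1)  # endpoint included by default
t[-1] == t_max                    # True
t[1] - t[0]                       # effective step, ~0.069979...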
The second problem is related to the use of floats, in this case dt. I usually use the suggestion in np.arange does not work as expected with floating point arguments. I arbitrarily chose 0.5 because it is halfway between not getting t_max (which I want) and getting t_max+dt (which I don't want), but this seems a bit odd.
t = np.arange(0, t_max/dt+0.5, 1, dtype=float)*dt
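To avoid the arbitrary 0.5, I have also considered rounding the step count to an integer first (a sketch, assuming t_max/dt is meant to be a whole number of steps):

n = int(round(t_max/dt))              # snaps 2858.0000000000005 to 2858
t = np.arange(n + 1, dtype=float)*dt  # always n + 1 points, though t[-1] can still differ from t_max in the last bits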
In MATLAB, all of this seems to work with a simple t = 0:dt:t_max;.
In my case, this is an issue because logical tests like the one below fail when I expect them to return True. Most of the time it is handy to work with x_max, but the real figure is x[-1] (as all the code is based on my time domain).
v = 3.0  # some constant speed (example value)
x_max = v*t_max
x = v*t
x_max == x[-1]  # False, because t[-1] != t_max
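The usual workaround I know of is a tolerance-based comparison such as np.isclose, but I would prefer the exact equality to hold:

np.isclose(x_max, x[-1])  # True (compares within rtol=1e-05, atol=1e-08 by default)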
I also refer to Python: Range or numpy Arange with end limit include, where a good alternative is given. However, the function provided there does not handle my dt = 0.06997900629811056 (it effectively turns it back into 0.07), so I am a bit concerned about using it.
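For completeness, this is a minimal sketch of the kind of helper I have in mind (the name inclusive_arange and the use of round() are my own assumptions), which rounds the number of steps rather than the step itself, so a dt like 0.06997900629811056 is preserved:

def inclusive_arange(start, stop, step):
    # Hypothetical helper: round the step count, not the step,
    # so a non-round step such as 0.06997900629811056 is kept as-is.
    n = int(round((stop - start)/step))
    return start + np.arange(n + 1)*step

t = inclusive_arange(0, 200, 0.06997900629811056)
len(t)  # 2859, as intended
t[-1]   # 199.99999999999997 on my machine, so the endpoint issue remains

This fixes the number of points, but still leaves the t[-1] != t_max problem, which is why I am asking what the recommended approach is.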