What is the reason that NaNs are considered less than -np.inf in any comparison involving np.min or np.argmin?
import numpy as np
In [73]: m = np.array([np.nan, 1., 0., -np.inf])
In [74]: n = np.array([-np.inf, 1., 0., np.nan])
# Huh??
In [75]: np.min(m)
Out[75]: nan
In [76]: np.min(n)
Out[76]: nan
# Same for np.argmin
In [77]: np.argmin(m)
Out[77]: 0
In [78]: np.argmin(n)
Out[78]: 3
# It's all False!
In [79]: np.nan < -np.inf
Out[79]: False
In [80]: np.nan > -np.inf
Out[80]: False
# OK, that seems to fix it, but it's not necessarily elegant
In [81]: np.nanmin(m)
Out[81]: -inf
In [82]: np.nanargmin(m)
Out[82]: 3
I would guess that it's probably a side effect of any comparison with NaN values returning False; however, this imho leads to some rather annoying effects when you "happen" to end up with a NaN value in your array. Using np.nanmin or np.nanargmin somehow feels like a quick fix that was stapled on top of the existing behaviour.
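To make that guess concrete, here is a minimal sketch of what a purely comparison-based minimum would do (the naive_min helper below is only illustrative, not how NumPy actually implements its reduction): since every comparison against NaN returns False, a NaN that lands in the accumulator is never replaced.

import numpy as np

def naive_min(arr):
    # Illustrative comparison-based reduction: keep `best` unless the
    # candidate compares strictly smaller. Every comparison against NaN
    # is False, so a NaN sitting in `best` is never displaced.
    best = arr[0]
    for x in arr[1:]:
        if x < best:
            best = x
    return best

m = np.array([np.nan, 1., 0., -np.inf])
n = np.array([-np.inf, 1., 0., np.nan])
print(naive_min(m))  # nan  -- the leading NaN is kept to the end
print(naive_min(n))  # -inf -- here the NaN never makes it into `best`

Note that such a loop would be order-dependent (nan for m, but -inf for n), whereas np.min returns nan for both arrays, which makes it look as if the propagation is deliberate rather than a pure comparison artifact.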
Apart from this note in the docs: "NaN values are propagated, that is if at least one item is NaN, the corresponding min value will be NaN as well. To ignore NaN values (MATLAB behavior), please use nanmin.", I haven't found anything that explains the rationale behind this behaviour. Is this intended, or a side effect of a particular internal representation of NaN values? And why?
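For what it's worth (and this is an assumption on my part about the internals), np.min appears to boil down to a reduction with the np.minimum ufunc, whose documentation likewise states that NaNs are propagated, so the element-wise building block already behaves this way regardless of operand order:

import numpy as np

n = np.array([-np.inf, 1., 0., np.nan])
# The element-wise ufunc propagates NaN no matter which operand it is ...
print(np.minimum(np.nan, -np.inf))  # nan
print(np.minimum(-np.inf, np.nan))  # nan
# ... so a reduction built on it does too.
print(np.minimum.reduce(n))         # nan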