I would like to create numpy.ndarray objects that hold complex integer values in them. NumPy does have complex support built-in, but for floating-point formats (float and double) only; I can create an ndarray with dtype='cfloat', for example, but there is no analogous dtype='cint16'. I would like to be able to create arrays that hold complex values represented using either 8- or 16-bit integers.
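To make the gap concrete, here is roughly what I mean (the 'cint16' spelling is just wishful naming on my part, not a real dtype):

    import numpy as np

    # Complex floating-point dtypes exist:
    a = np.empty(4, dtype='cfloat')     # complex128 -- works on my NumPy version

    # ...but there is no integer counterpart; e.g. this fails:
    # np.dtype('cint16')                # TypeError: data type 'cint16' not understood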
I found this mailing list post from 2007 where someone asked about such support. The only workaround recommended there was to define a new dtype that holds pairs of integers. That represents each array element as a tuple of two values, but it's not clear how much additional work would be needed to make the resulting data type work seamlessly with arithmetic functions.
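For reference, my understanding of that workaround looks something like the sketch below (the 're'/'im' field names are my own choice, not anything NumPy prescribes):

    import numpy as np

    # Pair-of-integers dtype along the lines of the mailing-list workaround
    complex_int16 = np.dtype([('re', np.int16), ('im', np.int16)])

    a = np.zeros(4, dtype=complex_int16)
    a['re'] += 3          # field access works fine
    a['im'] -= 1

    # Arithmetic on the whole array does not work out of the box, though:
    # a * a               # raises TypeError; ufuncs don't know the structured type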
I also considered another approach based on registering a user-defined type with NumPy. I don't mind dropping down to the C API to set this up if it will work well. However, the documentation for the type descriptor structure seems to suggest that the descriptor's kind field only supports signed/unsigned integer, floating-point, and complex floating-point numeric types. It's not clear that I could get anywhere trying to define a complex integer type this way.
What are some recommendations for an approach that may work?
Whatever scheme I select, it must allow wrapping existing complex integer buffers without performing a copy. That is, I would like to be able to use PyArray_SimpleNewFromData() to expose the buffer to Python without having to copy it first. The buffer would already be in interleaved real/imaginary format, and would be an array of either int8_t or int16_t.
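On the Python side, the closest I have come to satisfying the zero-copy requirement is reinterpreting the interleaved buffer through a structured dtype view. This is only a sketch that assumes the pair dtype from above, with a small NumPy array standing in for the existing C buffer:

    import numpy as np

    # Interleaved real/imag int16 data, standing in for the existing C buffer
    # (in the real setup this would come via PyArray_SimpleNewFromData or np.frombuffer)
    raw = np.array([1, -2, 3, 4, -5, 6], dtype=np.int16)   # (re, im) pairs

    # Reinterpreting as the pair dtype is a zero-copy view:
    complex_int16 = np.dtype([('re', np.int16), ('im', np.int16)])
    c = raw.view(complex_int16)
    assert c.base is raw          # shares the original memory, no copy made

    # Converting to a built-in complex type, by contrast, forces a copy:
    as_complex64 = raw.astype(np.float32).view(np.complex64)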