I work with different types of data, mostly int and sometimes float. The ints come in different sizes: 8, 16, and 32 bits.
For this situation I am writing a numerical type converter, so I check the type with isinstance(), because I have read that isinstance() is preferable to type().
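For context, this is roughly the kind of check the converter does; a simplified sketch where the function name and the target sizes are placeholders, not my real conversion rules:

import numpy as np

def to_number(value):
    # simplified sketch of the isinstance()-based dispatch;
    # the chosen target sizes are just placeholders
    if isinstance(value, float):
        return np.float64(value)
    if isinstance(value, int):
        return np.int32(value)
    raise TypeError("unsupported type: %s" % type(value))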
The problem is that a lot of the data I get comes in numpy arrays. I use Spyder as my IDE and the Variable Explorer shows a type next to each variable, but when I call isinstance(var, <the type shown there>) I get False.
I did some checks:
import numpy as np

a = 2.17
b = 3
c = np.array(np.random.rand(2, 8))
d = np.array([1])
For these, isinstance(var, type) gives:
>>> isinstance(a, float)
True
>>> isinstance(b, int)
True
>>> isinstance(c, float)  # same for isinstance(c, np.float64)
False
>>> isinstance(d, int)  # same for isinstance(d, np.int32)
False
c and d do give True when I ask:
>>> isinstance(c, np.ndarray)
True
>>> isinstance(d, np.ndarray)
True
I can step into the ndarray and check individual elements (with i and j valid indices):
>>> isinstance(c[i][j], np.float64)
True
>>> isinstance(d[i], np.int32)
True
But this means that for every dimension I have to add another index, otherwise the result is False again.
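For example, with the 2-D array c, indexing only one dimension still gives an array, so the scalar check fails until I index all the way down:

>>> isinstance(c[0], np.float64)     # c[0] is still a 1-D array, not a scalar
False
>>> isinstance(c[0], np.ndarray)
True
>>> isinstance(c[0][0], np.float64)  # only a fully indexed element is a scalar
True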
I can also check their type with dtype, e.g. c.dtype == 'float64'.
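That check works on the whole array without any indexing, for example:

>>> c.dtype == 'float64'   # describes every element at once, no indexing needed
True
>>> d.dtype == 'int32'     # True on my machine; the default int size is platform dependent
True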
OK, so given what I have found and tried, my questions are basically:
- How does the var.dtype check compare to isinstance() and type() (worse/better, etc.)?
- If var.dtype is even worse than isinstance(), is there a way to use isinstance() on an array without all the manual indexing (some kind of auto-indexing, etc.)?