I have a multidimensional array of grayscale integer values that needs to be normalized to the range 0-1. More precisely, it is an array in which every element is a matrix representing one image, and each of those matrices holds the image's pixels as integer values in the range 0-255.
Here is the normalization function:
def normalize(x, mmin=0.0, mmax=255.0):
    # scale pixel values from [mmin, mmax] into [0, 1); the small epsilon keeps
    # the denominator non-zero even if mmin == mmax
    x = (x - mmin) / (mmax - mmin + 10**(-5))
    return x
RIGHT: When, in the main module, I apply the function to the whole array at once:
trainingSet_Images = myUtils.normalize(trainingSet_Images)
The result is correctly an array of matrices with floating-point values.
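For reference, this is what I see if I inspect the result of the whole-array call (assuming trainingSet_Images is a NumPy ndarray that was loaded with dtype uint8; the checks below are only illustrative):

print(type(trainingSet_Images), trainingSet_Images.dtype)  # <class 'numpy.ndarray'> float64
print(trainingSet_Images.min(), trainingSet_Images.max())  # values now lie between 0.0 and just under 1.0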
WRONG: But when I apply the normalize() function element by element, like this:
for i in range(len(trainingSet_Images)):
    trainingSet_Images[i] = myUtils.normalize(trainingSet_Images[i])
every element of trainingSet_Images ends up as a matrix of integers whose values are all zero.
It seems that Python remembers the original type of the matrices - but why does the first way of doing the assignment work while the second does not?
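For completeness, here is a minimal, self-contained sketch that reproduces the behaviour I am describing, assuming the image data is a NumPy ndarray with dtype uint8 (the array contents are random dummy data; only the normalize() body is the same as above):

import numpy as np

def normalize(x, mmin=0.0, mmax=255.0):
    return (x - mmin) / (mmax - mmin + 10**(-5))

images = np.random.randint(0, 256, size=(3, 4, 4), dtype=np.uint8)  # three tiny dummy "images"

whole = normalize(images)   # a NEW float64 array is returned and the name is rebound
print(whole.dtype)          # float64

looped = images.copy()
for i in range(len(looped)):
    # the right-hand side is float64, but assigning into a row of a uint8 array
    # keeps the array's uint8 dtype, so every 0.xx value is truncated to 0
    looped[i] = normalize(looped[i])
print(looped.dtype, looped.max())  # uint8 0

Running this prints float64 for the whole-array version, and uint8 with all zeros for the looped version, which matches what I see with my real data.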