I want to evaluate a model and, at the same time, capture the activations of its penultimate layer. I used this answer as a solution; I access the penultimate activations with pen_ulti_activs = layer_outs[-2].
But to double-check whether that solution actually works, I added an assert to my code that compares the last-layer activations returned by functor with the array returned by model.predict. The assert fails, though, so I guess I am misunderstanding how the linked answer is intended to be used.
import numpy as np
from keras import backend as K

def evaluate_model(model, test_gen):
    inp = model.input                                           # input placeholder
    outputs = [layer.output for layer in model.layers]          # all layer outputs
    functor = K.function([inp, K.learning_phase()], outputs)    # evaluation function

    for inputs, targets in test_gen:
        layer_outs = functor([inputs, 1.])       # learning phase passed as 1
        pen_ulti_activs = layer_outs[-2]         # penultimate layer activations
        predictions = layer_outs[-1]             # last layer activations
        predictions_ = model.predict(inputs)
        assert np.allclose(predictions, predictions_)            # this fails
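For reference, here is a minimal, self-contained sketch of the same setup. The toy model, shapes, and data below are made up just for illustration, and it assumes a Keras version with a TF1-style backend where K.learning_phase() is available as a placeholder:

# Minimal standalone sketch; model architecture and data are invented
# just to have something runnable.
import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential([
    Dense(8, activation='relu', input_shape=(4,)),
    Dropout(0.5),
    Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

x = np.random.rand(16, 4).astype('float32')

outputs = [layer.output for layer in model.layers]
functor = K.function([model.input, K.learning_phase()], outputs)
layer_outs = functor([x, 1.])

# compare functor's last-layer output with model.predict
print(np.allclose(layer_outs[-1], model.predict(x)))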
So: why are predictions and predictions_ not equal? Shouldn't model.predict return the same values as the outputs of the last layer? After all, that is exactly what model.predict is supposed to return.