I wish to use a loss layer of type `InfogainLoss` in my model, but I am having difficulty defining it properly.

Is there any tutorial/example on the usage of the `INFOGAIN_LOSS` layer?

Should the input to this layer, the class probabilities, be the output of a `SOFTMAX` layer, or is it enough to input the "top" of a fully connected layer?

`INFOGAIN_LOSS` requires three inputs: class probabilities, labels, and the matrix `H`.
The matrix `H` can be provided either as a layer parameter, `infogain_loss_param { source: "filename" }`, or as a third input ("bottom") to the layer.
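
For concreteness, this is roughly the layer definition I have in mind (the blob names `"prob"` and `"label"` and the file name are placeholders, and I am not sure this syntax is even correct for my version of caffe):

```
layers {
  name: "infogain_loss"
  type: INFOGAIN_LOSS
  bottom: "prob"     # class probabilities (placeholder blob name)
  bottom: "label"    # ground-truth labels (placeholder blob name)
  top: "loss"
  infogain_loss_param {
    source: "infogainH.binaryproto"  # placeholder path to the H matrix file
  }
}
```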

Suppose I have a python script that computes `H` as a `numpy.array` of shape `(L,L)` with `dtype='f4'` (where `L` is the number of labels in my model).
How can I convert my `numpy.array` into a `binaryproto` file that can be provided as `infogain_loss_param { source }` to the model?
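
A minimal sketch of the kind of conversion I am after, using caffe's python interface (the reshape to `(1,1,L,L)` is only my guess at the expected blob layout; `L = 7` and the output file name are placeholders):

```python
import numpy as np
import caffe

L = 7                      # example number of labels (placeholder)
H = np.eye(L, dtype='f4')  # example H matrix; in practice my script computes it

# reshape (L,L) to the 4D (num, channels, height, width) layout of a caffe blob
blob = caffe.io.array_to_blobproto(H.reshape((1, 1, L, L)))
with open('infogainH.binaryproto', 'wb') as f:
    f.write(blob.SerializeToString())
```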

Suppose I want `H` to be provided as the third input ("bottom") to the loss layer (rather than as a model parameter). How can I do this?
Do I define a new data layer whose "top" is `H`? If so, wouldn't the data of this layer be incremented every training iteration, like the training data is? How can I define multiple unrelated input "data" layers, and how does caffe know to read from the training/testing "data" layer batch after batch, while from the `H` "data" layer it knows to read only once for the entire training process?
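
To make the last question concrete, this is the kind of setup I am imagining (completely made up; I do not know whether a `DATA` layer is even the right tool for a fixed matrix, which is part of what I am asking):

```
layers {
  name: "H_data"
  type: DATA              # is this the right layer type for a fixed matrix?
  top: "H"
  data_param {
    source: "H_lmdb"      # placeholder source holding the single H matrix
    batch_size: 1
  }
}
layers {
  name: "infogain_loss"
  type: INFOGAIN_LOSS
  bottom: "prob"          # class probabilities (placeholder blob name)
  bottom: "label"         # ground-truth labels (placeholder blob name)
  bottom: "H"             # H provided as the third "bottom"
  top: "loss"
}
```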
