I am looking at the TensorFlow "MNIST For ML Beginners" tutorial, and I want to print out the training loss after every training step.
My training loop currently looks like this:
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
Now, train_step is defined as:
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
where cross_entropy is the loss that I want to print out:
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
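For reference, x, y, and y_ are defined earlier in the tutorial, roughly as follows (reproduced from memory, so the exact lines may differ slightly):

x = tf.placeholder(tf.float32, [None, 784])    # flattened 28x28 input images
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)         # predicted class probabilities
y_ = tf.placeholder(tf.float32, [None, 10])    # one-hot ground-truth labels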
One way to print this would be to explicitly compute cross_entropy in the training loop:
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # evaluate the loss on the current batch, separately from the training step
    loss_val = sess.run(cross_entropy, feed_dict={x: batch_xs, y_: batch_ys})
    print 'loss = ' + str(loss_val)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
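I also wondered whether the loss could be fetched in the same call that runs the training step. Something like the sketch below is what I have in mind, but I don't know whether this is valid, or whether it would share the forward pass with train_step or still compute everything twice:

# Guess: fetch the training op and the loss tensor in a single call.
# Unclear to me whether this reuses the forward pass or runs it twice.
_, loss_val = sess.run([train_step, cross_entropy],
                       feed_dict={x: batch_xs, y_: batch_ys})
print 'loss = ' + str(loss_val)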
I now have two questions regarding this:
1. Given that cross_entropy is already computed during sess.run(train_step, ...), it seems inefficient to evaluate it a second time, requiring an extra forward pass over the same batch. Is there a way to access the value of cross_entropy that was computed during sess.run(train_step, ...)? (Is something like the combined fetch I sketched above the right approach?)
2. How do I even print the value of a TensorFlow tensor such as cross_entropy? Using str(cross_entropy) gives me an error... (my attempts so far are sketched below)
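For the second question, here is roughly what I have tried; I am only guessing that .eval() is relevant here:

print cross_entropy          # prints the symbolic Tensor, not its current value
print str(cross_entropy)     # likewise does not give me the numeric value
# Guess: evaluate it inside the session, feeding the placeholders?
print cross_entropy.eval(session=sess,
                         feed_dict={x: batch_xs, y_: batch_ys})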
Thank you!