I followed the tutorial here to implement logistic regression using Theano. The tutorial uses SciPy's
`fmin_cg` optimisation procedure. Among its important arguments are:
- `f`: the objective/cost function to be minimised,
- `x0`: a user-supplied initial guess of the parameters,
- `fprime`: a function which provides the gradient of `f`,
- `callback`: an optional user-supplied function, called after each iteration.
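For context, a minimal sketch of how these arguments fit together (the quadratic objective below is purely illustrative, not the tutorial's cost function):

```python
import numpy as np
from scipy.optimize import fmin_cg

# Hypothetical toy objective: minimised at x = 3 in every dimension.
def f(x):
    return np.sum((x - 3.0) ** 2)

# Its gradient, supplied via fprime.
def fprime(x):
    return 2.0 * (x - 3.0)

# Called by fmin_cg after each iteration with the current parameters.
def callback(xk):
    print("iteration finished, current x:", xk)

x0 = np.zeros(2)  # user-supplied initial guess
x_opt = fmin_cg(f, x0, fprime=fprime, callback=callback, disp=False)
```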
The training function is defined as follows:
```python
# creates a function that computes the average cost on the training set
def train_fn(theta_value):
    classifier.theta.set_value(theta_value, borrow=True)
    train_losses = [batch_cost(i * batch_size)
                    for i in xrange(n_train_batches)]
    return numpy.mean(train_losses)
```
The code above iterates over all the minibatches in the training set, computes the average cost of each minibatch (i.e. the mean of the cost function applied to each training sample in the minibatch), and then averages those costs over all the batches. It is worth pointing out that the cost of each individual batch is computed by `batch_cost`, a compiled Theano function.
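The averaging logic can be sketched in plain NumPy (the toy `per_sample_cost` array and this pure-Python `batch_cost` are stand-ins for the compiled Theano function, for illustration only):

```python
import numpy as np

# Toy per-sample costs standing in for the cost of each training sample.
per_sample_cost = np.arange(12, dtype=float)  # 12 "training samples"
batch_size = 4
n_train_batches = len(per_sample_cost) // batch_size

def batch_cost(start):
    # Average cost over one minibatch, analogous to the Theano batch_cost.
    return per_sample_cost[start:start + batch_size].mean()

train_losses = [batch_cost(i * batch_size) for i in range(n_train_batches)]
overall = np.mean(train_losses)
# With equal-sized batches, the mean of batch means equals the global mean.
assert overall == per_sample_cost.mean()
```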
To me, it seems that the `callback` function is being called at irregular intervals, not after every iteration as the SciPy documentation claims. Here is the output I received after adding a print statement to `train_fn` (printing "train") and to `callback` (printing "callback"):
```
... training the model
train
train
train
callback
validation error 29.989583 %
train
callback
validation error 24.437500 %
train
callback
validation error 20.760417 %
train
callback
validation error 16.937500 %
train
callback
validation error 14.270833 %
train
callback
validation error 14.156250 %
train
callback
validation error 13.177083 %
train
callback
validation error 12.270833 %
train
train
callback
validation error 11.697917 %
train
callback
validation error 11.531250 %
```
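The pattern above can be reproduced outside Theano. A small counter-based sketch (toy quadratic objective, purely illustrative) shows that `fmin_cg` may evaluate the objective more than once per iteration, while `callback` fires only once per iteration:

```python
import numpy as np
from scipy.optimize import fmin_cg

calls = {"f": 0, "callback": 0}

def f(x):
    calls["f"] += 1          # counts every evaluation, including line searches
    return np.sum((x - 1.0) ** 2)

def fprime(x):
    return 2.0 * (x - 1.0)

def callback(xk):
    calls["callback"] += 1   # counts completed CG iterations only

fmin_cg(f, np.zeros(3), fprime=fprime, callback=callback, disp=False)
# The line search inside each CG iteration may evaluate f several times,
# so f is generally called at least as often as callback.
```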
My question is: since each call to `train_fn` is effectively a training epoch, how do I change the behaviour so that `callback` is invoked after each call to `train_fn`?