Evaluation

Learn about the model evaluation process.

How to evaluate the model

How can we evaluate the model? We can compute the validation loss; that is, how wrong the model's predictions are for unseen data.

First, we need to use the model to compute predictions, and then use the loss function to compute the loss, given our predictions and the true labels. Sound familiar? These are pretty much the first two steps of the training step function we built as helper function #1.

So, we can use that code as a starting point, getting rid of its steps 3 and 4 (computing gradients and updating parameters). And, most importantly, we need to call the model's eval() method. The only thing it does is set the model to evaluation mode (just like its train() counterpart set it to training mode), so the model can adjust the behavior of layers, like Dropout, that work differently during training and evaluation.


“Why is setting the mode so important?”

Because some layers, such as Dropout and BatchNorm, behave differently depending on whether the model is training or evaluating, and calling eval() tells them to switch to their evaluation behavior.
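The difference in behavior is easy to see with a Dropout layer. A minimal sketch (the layer, tensor, and seed below are illustrative, not part of our model):

```python
import torch
import torch.nn as nn

torch.manual_seed(42)
dropout = nn.Dropout(p=0.5)
x = torch.ones(1, 10)

# TRAIN mode: each element is either zeroed out or
# scaled up by 1/(1-p) = 2 to keep the expected value
dropout.train()
out_train = dropout(x)

# EVAL mode: dropout is disabled, the input passes through unchanged
dropout.eval()
out_eval = dropout(x)
```

In evaluation mode, out_eval is identical to x, while in training mode out_train contains a random mix of zeros and scaled values; this is exactly why forgetting to call eval() produces noisy, misleading validation losses.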

Just like make_train_step, our new function, make_val_step, is a higher-order function as well. Its code looks like this:

def make_val_step(model, loss_fn):
    # Builds function that performs a step
    # in the validation loop
    def perform_val_step(x, y):
        # Sets model to EVAL mode
        model.eval()
        # Step 1 - Computes our model's predicted output
        # forward pass
        yhat = model(x)
        # Step 2 - Computes the loss
        loss = loss_fn(yhat, y)
        # There is no need to compute Steps 3 and 4,
        # since we don't update parameters during evaluation
        return loss.item()
    return perform_val_step
...
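Putting it to use might look like the sketch below, assuming a simple linear model and made-up validation tensors (x_val, y_val, and the seed are illustrative). Wrapping the call in torch.no_grad() is a good habit, since gradients are never needed during evaluation:

```python
import torch
import torch.nn as nn

def make_val_step(model, loss_fn):
    # Builds function that performs a step in the validation loop
    def perform_val_step(x, y):
        model.eval()
        yhat = model(x)
        loss = loss_fn(yhat, y)
        return loss.item()
    return perform_val_step

# Illustrative validation data: y = 2x + 1 plus a little noise
torch.manual_seed(13)
x_val = torch.rand(20, 1)
y_val = 2 * x_val + 1 + 0.1 * torch.randn(20, 1)

model = nn.Linear(1, 1)
loss_fn = nn.MSELoss()
val_step = make_val_step(model, loss_fn)

# No gradients are needed for evaluation, so we disable them
with torch.no_grad():
    val_loss = val_step(x_val, y_val)
```

The returned val_loss is a plain Python float (thanks to loss.item()), ready to be logged or compared against the training loss.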