Introducing Higher-Order Functions

Learn about higher-order functions and how they help reduce boilerplate code.

Using different optimizers, losses, and models

We finished the previous chapter with an important question:

“Would the code inside the training loop change if we were using a different optimizer, loss, or even model?”
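
Before revisiting that question, it helps to recall what a higher-order function is: a function that takes another function as an argument, returns a function, or both. A returned function can capture values from its enclosing scope, a behavior known as a closure, which lets us build a configured function once and reuse it many times. Below is a minimal, self-contained sketch of this pattern; the names exponentiation_builder and square are purely illustrative:

def exponentiation_builder(exponent):
    # The inner function captures "exponent" from the enclosing scope
    def skeleton_exponentiation(x):
        return x ** exponent
    # Returns a function, making this a higher-order function
    return skeleton_exponentiation

# Builds a configured function once...
square = exponentiation_builder(2)
# ...and reuses it as many times as needed
print(square(3))  # 9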

Below, you will find the commands that run the data generation, data preparation, and model configuration parts of our code:

%run -i data_generation/simple_linear_regression.py
%run -i data_preparation/v0.py
%run -i model_configuration/v0.py
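
For reference, the model configuration script is assumed to set up a device, a simple linear model, an SGD optimizer, and an MSE loss. A rough sketch of what it might contain follows; the exact contents of model_configuration/v0.py may differ:

import torch
import torch.nn as nn
import torch.optim as optim

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Sets learning rate
lr = 0.1

torch.manual_seed(42)
# Creates a simple linear regression model and sends it to the device
model = nn.Sequential(nn.Linear(1, 1)).to(device)

# Defines an SGD optimizer to update the model's parameters
optimizer = optim.SGD(model.parameters(), lr=lr)

# Defines an MSE loss function
loss_fn = nn.MSELoss(reduction='mean')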

Next is the code that trains the model:

# Defines number of epochs
n_epochs = 1000

for epoch in range(n_epochs):
    # Sets model to TRAIN mode
    model.train()

    # Step 1 - computes model's predicted output - forward pass
    # No more manual prediction!
    yhat = model(x_train_tensor)

    # Step 2 - computes the loss
    loss = loss_fn(yhat, y_train_tensor)

    # Step 3 - computes gradients for both "b" and "w" parameters
    loss.backward()

    # Step 4 - updates parameters using gradients and
    # the learning rate
    optimizer.step()
    optimizer.zero_grad()

print(loss)
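
This is where higher-order functions pay off: the steps inside the loop depend only on the model, the loss function, and the optimizer, so we can write a builder that captures all three and returns a ready-to-use training step. Here is a sketch of that idea; the names make_train_step and perform_train_step are illustrative, not part of any PyTorch API:

def make_train_step(model, loss_fn, optimizer):
    # Builds a function that performs a single step in the training loop,
    # capturing model, loss_fn, and optimizer from the enclosing scope
    def perform_train_step(x, y):
        # Sets model to TRAIN mode
        model.train()
        # Step 1 - computes the model's predicted output - forward pass
        yhat = model(x)
        # Step 2 - computes the loss
        loss = loss_fn(yhat, y)
        # Step 3 - computes the gradients
        loss.backward()
        # Step 4 - updates parameters and zeroes gradients
        optimizer.step()
        optimizer.zero_grad()
        # Returns the loss as a plain Python number
        return loss.item()
    # Returns the function that will be called inside the training loop
    return perform_train_step

With such a builder, the loop body no longer needs to change when we swap in a different optimizer, loss, or model:

train_step = make_train_step(model, loss_fn, optimizer)
for epoch in range(n_epochs):
    loss = train_step(x_train_tensor, y_train_tensor)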

After running the code above, you can inspect the parameter values of the linear model:

# Prints the parameter values of the linear model
print(model.state_dict())

GPU users will get an output similar to the following:

...