Autograd

Learn about autograd in PyTorch.

Introduction to autograd

Autograd is PyTorch’s automatic differentiation package. Thanks to it, we do not need to worry about partial derivatives, the chain rule, or anything of the sort.
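As a minimal illustration (the tensor and the function here are made up purely for this example), autograd tracks every operation performed on a tensor created with requires_grad=True and, when asked, computes the derivative for us:

import torch

# A tensor that requires gradients: autograd will track
# every operation involving it
x = torch.tensor(2.0, requires_grad=True)

# Build a small computation graph: y = x^2 + 3x
y = x ** 2 + 3 * x

# Ask autograd to compute dy/dx
y.backward()

# dy/dx = 2x + 3 = 7 at x = 2
print(x.grad)  # tensor(7.)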

The backward method

So, how do we tell PyTorch to do its thing and compute all gradients? That is the role of the backward() method. It computes gradients for all tensors that require gradients (those created with requires_grad=True) and were involved in the computation of a given variable.

Do you remember the starting point for computing the gradients? It was the loss, as we computed its partial derivatives w.r.t. our parameters. Hence, we need to invoke the backward() method from the corresponding Python variable: loss.backward().

# Step 1 - Computes our model's predicted output - forward pass
yhat = b + w * x_train_tensor
# Step 2 - computes the loss
# We are using ALL data points, so this is BATCH gradient
# descent. How wrong is our model? That's the error!
error = (yhat - y_train_tensor)
# It is a regression, so it computes mean squared error (MSE)
loss = (error ** 2).mean()
# Step 3 - computes gradients for both "b" and "w" parameters
# No more manual computation of gradients!
# b_grad = 2 * error.mean()
# w_grad = 2 * (x_tensor * error).mean()
loss.backward() # 1)
...

The following notes refer to the steps (old and new) of the process occurring ...
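The snippet above assumes that b, w, x_train_tensor, and y_train_tensor were created in earlier steps of the course. A self-contained sketch of that setup, using synthetic data purely for illustration, and of inspecting the gradients produced by loss.backward() might look like this:

import torch

# Synthetic data for a linear relation y = 1 + 2x plus noise
# (illustrative only; the course uses its own dataset)
torch.manual_seed(42)
x_train_tensor = torch.rand(100, 1)
y_train_tensor = 1 + 2 * x_train_tensor + 0.1 * torch.randn(100, 1)

# Parameters that require gradients
b = torch.randn(1, requires_grad=True, dtype=torch.float)
w = torch.randn(1, requires_grad=True, dtype=torch.float)

# Forward pass, loss, and backward pass (same steps as above)
yhat = b + w * x_train_tensor
error = (yhat - y_train_tensor)
loss = (error ** 2).mean()
loss.backward()

# The gradients now live in the .grad attribute of each parameter
print(b.grad, w.grad)

Note that calling backward() again would accumulate new gradients on top of the ones already stored in .grad, which is why training loops zero the gradients between iterations.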
