Loading Data, Devices, and CUDA
Explore converting Numpy arrays into PyTorch tensors while managing data on CPU and GPU devices. Understand how to check GPU availability with CUDA, transfer tensors between devices, and avoid common conversion errors to prepare your data effectively for training deep learning models.
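As a quick reference, the typical pattern for checking GPU availability and moving a tensor between devices looks like the sketch below; the variable names are illustrative, not part of the lesson's own code.

```python
import torch

# Pick the GPU if CUDA is available, otherwise fall back to the CPU
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# A tensor is created on the CPU by default...
x = torch.tensor([1.0, 2.0, 3.0])

# ...and can be transferred to the chosen device with .to()
x = x.to(device)
print(x.device)  # cuda:0 if a GPU is available, otherwise cpu
```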
Conversions between Numpy and PyTorch
It is time to start converting our Numpy code to PyTorch. We will start with the training data; that is, our x_train and y_train arrays.
The “as_tensor” method
“How do we go from Numpy’s arrays to PyTorch’s tensors?”
The as_tensor method preserves the type of the original array, as you can see in the code below:
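A minimal sketch of this behavior, assuming x_train is a Numpy array of 64-bit floats standing in for the training features from earlier lessons:

```python
import numpy as np
import torch

# Assumption: x_train represents the training features (Numpy defaults to float64)
x_train = np.random.rand(100, 1)

x_train_tensor = torch.as_tensor(x_train)
print(x_train.dtype)         # float64
print(x_train_tensor.dtype)  # torch.float64 - the original type is preserved
```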
You can also easily cast it to a different type, such as a lower-precision (32-bit) float, which occupies less space in memory, using the float() method:
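A short, self-contained sketch of the cast (again assuming x_train is a 64-bit Numpy array of training features):

```python
import numpy as np
import torch

# Assumption: x_train stands in for the training features (64-bit floats)
x_train = np.random.rand(100, 1)

# Convert to a tensor and cast it down to 32-bit floats
x_train_tensor = torch.as_tensor(x_train).float()
print(x_train_tensor.dtype)  # torch.float32 - half the memory of float64
```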
IMPORTANT: Both as_tensor() and from_numpy() return a tensor that shares the underlying data with the original Numpy array. Similar to what happened when we used view() in the previous lesson, if you modify the original Numpy array, you are modifying the corresponding PyTorch tensor too, and vice versa. ...
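A short sketch of this sharing behavior (the array name here is illustrative):

```python
import numpy as np
import torch

dummy_array = np.array([1.0, 2.0, 3.0])
dummy_tensor = torch.as_tensor(dummy_array)

# Modifying the Numpy array in place...
dummy_array[1] = 0.0

# ...changes the tensor as well, since both point to the same memory
print(dummy_tensor)  # tensor([1., 0., 3.], dtype=torch.float64)
```

Note that casting to a different type, as with float() in the previous snippet, creates a new tensor with its own storage, so that tensor no longer shares memory with the original array.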