...


Using CUDA in Python

Learn about using CUDA in Python.

📝 The code that involves CUDA will run on Google Colab only. Have a look at Appendix 1 for instructions on working in the Colab workspace.

To use Google’s GPUs from a Python notebook, we need to change a setting. From the menu at the top of the notebook, choose Runtime and then Change runtime type.

Keep the runtime type as Python 3, and change the hardware accelerator from None to GPU. This will restart the virtual machine inside Google’s platform, and attach a GPU.
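Once the runtime restarts, a quick way to confirm that a CUDA device is actually visible from Python is to ask PyTorch directly. This check is not part of the lesson’s own snippets, but it is a minimal sketch using standard torch.cuda calls:

import torch

# True if Colab attached a CUDA-capable GPU to this runtime
print(torch.cuda.is_available())

# Name of the attached device (the exact model varies by session)
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))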

Probably the simplest thing we can do is create a tensor that lives on the GPU. Up to this point, our tensors were stored in normal computer memory and accessed by the CPU.

At the beginning of this guide, we created a tensor like this.

import torch
x = torch.tensor(3.5)  # a scalar tensor held in ordinary CPU memory
print(x)

That simple code didn’t specify which type of number to use for the tensor, so the default type, float32, was used. You may know that NumPy offers many data types, such as int32 for integers, float32 for floating-point numbers, and even float64 for higher precision.
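As a quick sanity check (not shown in the original snippet), a tensor’s dtype attribute reports which type was chosen; for the tensor x created above it is the float32 default:

print(x.dtype)  # torch.float32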

Creating tensors of type float

If we wanted to be more specific and not rely on default types, we would write the following.

x = torch.FloatTensor([3.5])  # explicitly a 32-bit floating-point tensor

To check the type, we simply use x.type(). We can see the type is torch.FloatTensor.
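For illustration, and assuming the tensor x from the snippet above, x.type() returns the tensor’s class name as a string; the same float tensor can also be built with the dtype argument of torch.tensor, which is equivalent here:

print(x.type())  # torch.FloatTensor

# Equivalent construction using an explicit dtype
y = torch.tensor([3.5], dtype=torch.float32)
print(y.type())  # torch.FloatTensor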

Creating tensors on the GPU

...