How to resolve Torch not compiled with CUDA enabled

PyTorch supports GPU-accelerated computation. A GPU (graphics processing unit) is a specialized processor originally designed to accelerate graphics rendering. CUDA is NVIDIA's parallel computing platform and programming model that enables computationally intensive applications to run on GPUs for faster performance.

What causes the error

The "Torch not compiled with CUDA enabled" error occurs when the installed PyTorch build does not include CUDA support. This typically happens when a CPU-only build of PyTorch is installed, or when the CUDA Toolkit is not set up on the machine.
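For example, trying to move a tensor to the GPU on a CPU-only build raises this error. The snippet below is a minimal sketch of how the error typically appears (the tensor values are arbitrary):

import torch

# On a CPU-only PyTorch build, the next line raises:
# AssertionError: Torch not compiled with CUDA enabled
x = torch.tensor([1.0, 2.0]).cuda()

To check whether the installed PyTorch build supports CUDA, we can print the PyTorch version and query CUDA availability: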

print("Torch version:",torch.__version__)
print("Is CUDA enabled?",torch.cuda.is_available())

If the torch.cuda.is_available() call in the code above returns False, the current PyTorch build does not support CUDA. We can resolve this error by installing the CUDA Toolkit on our machine and reinstalling PyTorch with CUDA support. It is essential to install a PyTorch build that matches the CUDA version installed on the machine; the check after the list below shows how to see which CUDA version the current build targets.

For instance, PyTorch provides separate builds for different CUDA versions:

  • A build with support for CUDA 11.0.

  • A build with support for CUDA 10.1.
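To see which CUDA version, if any, our current PyTorch build was compiled against, we can query torch.version.cuda; it returns None on CPU-only builds:

import torch

# Prints the CUDA version the build targets (e.g., "11.8") or None for CPU-only builds
print("Compiled CUDA version:", torch.version.cuda)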

Note: If your machine does not have an NVIDIA GPU, CUDA with PyTorch will not work.
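If the NVIDIA driver is installed, we can quickly confirm that an NVIDIA GPU is present and see the driver and supported CUDA versions by running:

nvidia-smi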

Solutions

Uninstalling PyTorch

The first step is to remove the current (CPU-only) installation of PyTorch so that it can be reinstalled with CUDA support. To uninstall PyTorch, we enter the following command in the terminal or command prompt:

pip uninstall torch

Keep in mind that we might have to run the command twice to confirm uninstallation.

Note: Once we see the WARNING: Skipping torch as it is not installed message, we will know that PyTorch has been completely uninstalled.
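We can also double-check that the package is gone by querying pip for it; once PyTorch is removed, pip reports that the package was not found:

pip show torch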

Installing CUDA Toolkit

The next approach is to install the NVIDIA CUDA Toolkit before installing PyTorch with CUDA support.

To accomplish this, we first check that our GPU is compatible with CUDA before installing the CUDA Toolkit. We can check the list of CUDA-compatible GPUs on the NVIDIA website.
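On Linux, we can find the exact GPU model to look up in that list by listing the PCI devices and filtering for NVIDIA hardware:

# Shows the NVIDIA GPU(s) installed in the machine
lspci | grep -i nvidia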

Note: To learn more about CUDA installation on your machine, visit the official CUDA documentation.

We can also install the CUDA Toolkit using a package and environment manager such as conda (via Miniconda) on Linux. We start by downloading the Miniconda installer script for Linux from the official conda website and running the following command:

bash Miniconda<version>.sh

We replace the <version> placeholder with the version in the downloaded script's filename. Then, we create a conda environment by running the following command in the terminal:

conda create --name cuda_env

We then activate it using the command shown below.

conda activate cuda_env

The next step is to add the conda-forge and nvidia channels, which give us a convenient way to access and install the CUDA Toolkit. This allows us to find NVIDIA-specific packages without having to download and install them manually from separate sources.

conda config --append channels conda-forge
conda config --append channels nvidia

Next, we install CUDA Toolkit and its dependencies by using the following command:

conda install cudatoolkit
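If a specific toolkit version is required, for example to match a particular PyTorch build, the version can usually be pinned. The version number below is only an example; availability depends on the configured channels:

conda install cudatoolkit=11.8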

We can verify the installation by running the command shown below. Note that some conda cudatoolkit packages ship only the runtime libraries; if nvcc is not found, the compiler may need to be installed separately (for example, from the nvidia channel).

nvcc --version

Installing PyTorch with CUDA support

We can install PyTorch with CUDA support by running one of the following commands in the terminal or command prompt (the first is a concrete example for CUDA 11.8, the second shows the general pattern):

pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu<CUDA version>

We replace the <CUDA version> placeholder with the major and minor digits of the installed CUDA version (for example, cu118 for CUDA 11.8). The supported CUDA versions and their corresponding install commands are listed on the official PyTorch installation page.
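Since we already created a conda environment, we can alternatively install PyTorch from the pytorch channel. The pinned CUDA version below is an example and should match the toolkit installed earlier:

conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia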

Lastly, we need to verify that PyTorch has been installed with CUDA support.

import torch
print("Torch version:",torch.__version__)
print("Is CUDA enabled?",torch.cuda.is_available())

If torch.cuda.is_available() returns True, we have successfully installed PyTorch with CUDA support.
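As a final sanity check, we can move a small tensor to the GPU and run an operation on it; the tensor shape and variable names below are arbitrary:

import torch

# Select the GPU if it is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Create a random tensor on the selected device and square it
x = torch.rand(3, 3, device=device)
y = x * x
print("Tensor is on:", y.device)  # prints "cuda:0" when CUDA is enabled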
