Hands-On QLoRA

Learn how to fine-tune an LLM on a custom dataset using QLoRA.

Let’s fine-tune Meta’s Llama 3.1 model on the openai/gsm8k dataset using QLoRA.

Install the dependencies

First, let’s install the libraries required for fine-tuning. We’ll install the latest versions (at the time of writing) of each library.

pip3 install transformers==4.44.1
pip3 install accelerate
pip3 install bitsandbytes==0.43.3
pip3 install datasets==2.21.0
pip3 install trl==0.9.6
pip3 install peft==0.12.0
pip3 install -U "huggingface_hub[cli]"
  • Line 1: We install the transformers library, which is a Hugging Face library that provides APIs and tools to download and train state-of-the-art pretrained models.

  • Line 2: We install the accelerate library, which is designed to facilitate training deep learning models across different hardware. It enables the training and inference to be simple, efficient, and adaptable.

  • Line 3: We install the bitsandbytes library, which provides the 4-bit and 8-bit quantization routines that transformers uses to load and run models in quantized form.

  • Line 4: We install the datasets library for sharing and accessing datasets for downstream tasks.

  • Line 5: We install the trl library for training transformer models with reinforcement learning and supervised fine-tuning.

  • Line 6: We install the peft library for parameter-efficient fine-tuning (PEFT) of large language models on downstream tasks.

  • Line 7: We install the Hugging Face CLI to log in and access the model and dataset from Hugging Face.
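To see how these libraries fit together, here is a minimal sketch of a QLoRA setup: bitsandbytes supplies the 4-bit quantization config, transformers loads the quantized model, and peft attaches the LoRA adapters. The model ID and hyperparameters (rank, alpha, target modules) are illustrative assumptions, not values prescribed by this lesson; loading the model requires a GPU and access to the gated Llama 3.1 checkpoint, so this is shown as a configuration sketch rather than a runnable test.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# The "Q" in QLoRA: load the base model weights in 4-bit NF4 precision
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Illustrative model ID; the checkpoint is gated and requires a Hugging Face login
model_id = "meta-llama/Meta-Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# The "LoRA" part: freeze the quantized weights and train small
# low-rank adapter matrices on the attention projections
lora_config = LoraConfig(
    r=16,                    # adapter rank (illustrative)
    lora_alpha=32,           # scaling factor (illustrative)
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters are trainable
```

With this setup, only a small fraction of the parameters (the LoRA adapters) are updated during fine-tuning, which is what makes QLoRA feasible on a single GPU.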

Hugging Face CLI

After installing the required libraries, it’s time to log in to the Hugging Face CLI. Hugging Face requires this step to access any model and dataset from ...
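In practice, the login looks like the following. The interactive command prompts for an access token, which you can create in your Hugging Face account settings; the `--token` form is a non-interactive alternative (the `HF_TOKEN` environment variable here is an assumed placeholder for your own token).

```shell
# Interactive: prompts for an access token from your Hugging Face account settings
huggingface-cli login

# Non-interactive alternative (e.g., in notebooks or CI),
# assuming your token is stored in the HF_TOKEN environment variable
huggingface-cli login --token "$HF_TOKEN"
```

Once logged in, the transformers and datasets libraries can download gated models such as Llama 3.1 on your behalf.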
