Low-Rank Adaptation (LoRA)

Learn about the Low-Rank Adaptation technique, how it works during model fine-tuning, and the different parameters required to implement it.

With rapid development in the field of generative AI, LLMs are becoming an integral part of daily life and business operations, transforming the way we interact, work, and innovate. Despite their impressive capabilities, these models often need to be trained on specific tasks, datasets, or domains to achieve optimal performance. Fine-tuning adapts a model to those tasks and datasets, unlocking its full potential, but it is computationally expensive and time-consuming. As we push the boundaries of AI, there is a growing need for efficient, cost-effective fine-tuning techniques that still maximize the model’s performance.

In 2023, Hugging Face released its parameter-efficient fine-tuning (PEFT) library, which adapts a model by training only a small number of parameters without compromising its performance. PEFT encompasses several techniques, one of which is Low-Rank Adaptation (LoRA), an effective way to fine-tune LLMs that balances efficiency and adaptability. Let’s dive into the details of LoRA and see how it works.
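To give a sense of how this looks in practice, here is a minimal sketch using the Hugging Face peft library. The model name and the hyperparameter values are illustrative choices, not requirements:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a pretrained model (any causal LM works; this one is illustrative).
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Configure LoRA: the rank, the scaling factor, and which modules to adapt.
config = LoraConfig(
    r=8,                                  # rank of the low-rank matrices
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the model: the original weights are frozen, and only the
# injected LoRA matrices remain trainable.
model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports a small fraction of parameters
```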

What is LoRA?

LoRA is a technique that adapts a pretrained model by adding reparameterized weights without modifying the original weights. It reparameterizes each weight update as the product of two low-rank matrices, and only these low-rank matrices are trained to adapt the model to a new task, drastically reducing the number of trainable parameters. This approach significantly decreases computational expense and training time, making LoRA an attractive solution for efficient fine-tuning.
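Concretely, for a frozen pretrained weight matrix W of size d × k, LoRA learns the update as a product of two small matrices (this is the standard formulation from the original LoRA paper):

```latex
W' = W + \Delta W = W + BA,
\qquad B \in \mathbb{R}^{d \times r},\;
A \in \mathbb{R}^{r \times k},\;
r \ll \min(d, k)
```

Training only A and B reduces the trainable parameter count from d × k to r × (d + k); with a small rank such as r = 8, this is a tiny fraction of the original.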

How does LoRA work?

LoRA works by representing the update to each weight matrix of the pretrained model as the product of two much smaller, low-rank matrices. These new matrices are injected into each layer of the transformer and trained to adapt to the dataset, while the original weights of the pretrained model are frozen to avoid catastrophic forgetting. The learned low-rank update is then combined with the original weights to produce the results.
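These mechanics can be illustrated with a minimal, from-scratch sketch of a LoRA-adapted linear layer in PyTorch. The class and parameter names below are our own illustrative choices, not a standard API:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A linear layer with a frozen base weight and a trainable low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)

        d, k = base.out_features, base.in_features
        # Low-rank factors: B starts at zero so the initial update is zero,
        # leaving the pretrained behavior unchanged when training begins.
        self.A = nn.Parameter(torch.randn(r, k) * 0.01)
        self.B = nn.Parameter(torch.zeros(d, r))
        self.scale = alpha / r  # scaling convention from the LoRA paper

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen output plus the low-rank update: (W + scale * B @ A) x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap an existing layer; only A and B receive gradients.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
out = layer(torch.randn(4, 768))
```

Initializing B to zeros is a deliberate design choice: the adapted model starts out computing exactly what the pretrained model computes, and the low-rank update grows only as training progresses.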
