Optimizing and Training LLMs

Learn about fine-tuning LLMs, including what it is, how to do it, key parameters, and various techniques.

Now that you’ve seen the impressive capabilities of LLMs trained on vast datasets, you might wonder how to tailor them to work with the specific data you need. This is where fine-tuning comes into play, allowing you to adapt these models to your unique datasets and tasks.

What is fine-tuning?

Fine-tuning takes a pretrained language model and trains it further on a smaller, task-specific dataset. This helps the model adjust its parameters to better capture the nuances of the target domain, enhancing its performance for particular applications. Fine-tuning allows the model to retain its general language understanding while becoming more adept at tasks such as sentiment analysis, summarization, or domain-specific question answering.


When do you need fine-tuning?

Let’s say you’re building a chatbot for a hospital to answer patients’ questions about medical procedures. While a pretrained language model understands general language, it might not know how to respond to a specific question like, “What are the pre-surgery requirements for a knee replacement?”

In this case, fine-tuning the model on a dataset of medical FAQs and hospital-specific information ensures it provides accurate and relevant answers.
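To make this concrete, here is a minimal sketch of what such a task-specific dataset might look like, written in Python. The file name and the question-answer pairs are hypothetical, and the prompt/response JSONL layout is just one common convention that many fine-tuning tools accept:

```python
import json

# Hypothetical medical FAQ pairs; a real dataset would contain thousands
# of hospital-reviewed question-answer examples.
faq_examples = [
    {
        "prompt": "What are the pre-surgery requirements for a knee replacement?",
        "response": "Patients typically complete bloodwork, a physical exam, and "
                    "fasting instructions; specifics should be confirmed with the care team.",
    },
    {
        "prompt": "How long is the recovery period after a knee replacement?",
        "response": "Initial recovery often takes 6 to 12 weeks, with physical "
                    "therapy continuing afterward.",
    },
]

# Write one JSON object per line (the JSONL format).
with open("medical_faq.jsonl", "w") as f:
    for example in faq_examples:
        f.write(json.dumps(example) + "\n")
```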

How to fine-tune an LLM

The following are the key steps to fine-tune any LLM (a minimal code sketch follows the list):

  1. Select a pretrained model: Choose a suitable pretrained LLM that aligns with your goals (e.g., GPT, BERT, etc.).

  2. Gather a task-specific dataset: Collect a dataset relevant to the specific task you want the model to excel at. This dataset should be smaller than the one used for pretraining but should contain examples that reflect the target use case.

  3. Prepare the data: Clean and format the dataset to ensure it is ready for training. This may include tokenization, normalization, and splitting the data into training and validation sets.

  4. Train the model: Use a training framework (such as PyTorch or TensorFlow) to fine-tune the model on your task-specific dataset. During this step, adjust hyperparameters such as learning rate and batch size to optimize performance.

  5. Evaluate and adjust: After training, evaluate the model on a validation set to assess its performance. Based on the results, you may need to adjust hyperparameters or continue training to improve outcomes.
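As an illustration of steps 2 through 5, here is a minimal end-to-end sketch using the Hugging Face transformers and datasets libraries. The distilgpt2 checkpoint, the medical_faq.jsonl file sketched earlier, and the hyperparameter values are all assumptions chosen for demonstration, not recommendations:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

# Step 1: a small pretrained model keeps this sketch cheap to run.
model_name = "distilgpt2"  # assumed model; swap in your own
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Steps 2-3: load the task-specific data and split off a validation set.
dataset = load_dataset("json", data_files="medical_faq.jsonl")["train"]
dataset = dataset.train_test_split(test_size=0.1)

def tokenize(batch):
    # Join each prompt with its response into a single training string.
    texts = [p + "\n" + r for p, r in zip(batch["prompt"], batch["response"])]
    tokens = tokenizer(texts, truncation=True, padding="max_length", max_length=128)
    tokens["labels"] = tokens["input_ids"].copy()  # causal LM objective
    return tokens

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=["prompt", "response"])

# Step 4: train, with learning rate and batch size as tunable hyperparameters.
args = TrainingArguments(
    output_dir="finetuned-model",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    num_train_epochs=3,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["test"])
trainer.train()

# Step 5: evaluate on the held-out split (reports validation loss).
print(trainer.evaluate())
```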

WARNING: Sounds easy? It’s not! Fine-tuning an LLM may look like a straightforward process, but behind the scenes, it requires massive computation. Even though you’re working with a smaller, task-specific dataset, fine-tuning a model like GPT keeps hardware busy performing trillions of floating-point operations per second, and full pretraining requires many orders of magnitude more total compute. So while you’re adjusting hyperparameters or formatting data, enormous computational power is doing the heavy lifting.
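To put rough numbers on that claim, a widely used rule of thumb from the scaling-laws literature estimates total training compute as about 6 × parameters × training tokens floating-point operations. The model size and token counts below are purely illustrative assumptions:

```python
# Back-of-envelope compute estimate using the common
# "FLOPs ≈ 6 * parameters * tokens" approximation.
def training_flops(num_parameters: float, num_tokens: float) -> float:
    return 6 * num_parameters * num_tokens

# Hypothetical fine-tuning run: a 7B-parameter model on 10 million tokens.
print(f"fine-tuning: ~{training_flops(7e9, 1e7):.1e} FLOPs")   # ~4.2e+17

# Hypothetical pretraining run: the same model on 2 trillion tokens.
print(f"pretraining: ~{training_flops(7e9, 2e12):.1e} FLOPs")  # ~8.4e+22
```

The roughly five-orders-of-magnitude gap between the two runs is why fine-tuning is feasible for many teams while pretraining from scratch is not.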

Parameters and LLMs

When fine-tuning an LLM, you’re dealing with a model with billions of parameters, the building blocks of its language understanding. Fine-tuning involves adjusting these parameters to tailor the model to your specific dataset or task. This process refines the model’s performance without retraining it from the ground up, allowing it to adapt while still leveraging the knowledge it gained during pretraining. Some LLMs and their parameter counts are listed below (B means billion):

Model Name           Parameters
GPT-4o               Undisclosed
Google Gemini        10B-175B
Llama 3.1            8B, 70B, 405B
Claude 3.5 Sonnet    Undisclosed
Phi-2                2.7B
Mistral Large 2      123B
Gemma                2B, 7B
OLMo                 7B
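
As a quick sanity check on numbers like these, you can count a model’s parameters yourself. A minimal sketch, assuming the Hugging Face transformers library and the openly available microsoft/phi-2 checkpoint from the table:

```python
from transformers import AutoModelForCausalLM

# Download and load Phi-2 (listed above at 2.7B parameters).
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

# Sum the element counts of every weight tensor in the model.
total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params / 1e9:.2f}B")  # expected: roughly 2.7B
```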

Types of LLM fine-tuning techniques

Fine-tuning techniques can be categorized into several types, each offering a different way to adapt a pretrained LLM. Non-parametric techniques, like in-context learning and retrieval-augmented generation (RAG), guide the model with relevant examples or external knowledge without altering its internal parameters. Parametric techniques, such as full fine-tuning and parameter-efficient fine-tuning (PEFT), adjust the model’s parameters to specialize it for specific tasks.
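
As a taste of the parametric side, here is a minimal LoRA sketch using the Hugging Face peft library, which freezes the base model and trains only small low-rank adapter matrices. The distilgpt2 base model and the rank, alpha, and dropout values are illustrative assumptions:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Any causal LM checkpoint works similarly; distilgpt2 is assumed here.
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# LoRA configuration: the base weights stay frozen while small
# low-rank adapter matrices are trained alongside them.
lora_config = LoraConfig(
    r=8,              # rank of the adapter matrices
    lora_alpha=16,    # scaling factor applied to the adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)

# Reports how few parameters remain trainable compared to the full model.
peft_model.print_trainable_parameters()
```

Training the wrapped peft_model then proceeds exactly as in the earlier Trainer sketch, but with a far smaller set of trainable weights.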

Fine-tuning is like personalizing your phone!

Just like how you customize your phone by setting your wallpaper, changing notification sounds, or choosing app preferences, fine-tuning an LLM is about personalizing it for a specific task. Whether it’s teaching it medical jargon or making it better at chatting about movies, fine-tuning tweaks the model to fit your needs, without changing its entire personality!

Curious about how to fine-tune an LLM hands-on? Look no further! Dive into: Fine-Tuning LLMs Using LoRA and QLoRA