
A Comprehensive Guide to Fine-Tuning Llama 2 7B on Custom Datasets

Updated: Feb 12

Fine tune Llama 7b

Introduction


In this blog post, we will walk through the fine-tuning process for Llama 2 7B. Leveraging the capabilities of Hugging Face Transformers and TRL, we will cover two major techniques:

  1. Analysis of the Base Model with Prompts: Exploring the complexities of prompts and prompt templates, and their effect on the performance of the model.

  2. Optimizing Large Language Models through Fine-Tuning: Fine-tuning the model with a focus on efficiency on a single GPU, using techniques like LoRA and 4-bit quantization.


LLaMA


LLaMA is a collection of foundation language models ranging from 7B to 65B parameters. The models are trained on trillions of tokens, demonstrating that state-of-the-art models can be built solely from publicly accessible datasets, without reliance on proprietary or inaccessible data sources. Notably, LLaMA-13B outperforms GPT-3 (175B) on a majority of benchmarks, while LLaMA-65B competes favourably with top-tier models such as Chinchilla-70B and PaLM-540B.

LLaMA, an auto-regressive language model, is built on the transformer architecture. Like other prominent language models, LLaMA functions by taking a sequence of words as input and predicting the next word, recursively generating text.



Training: When a model is built from the ground up, it undergoes training. This process adjusts all of the model's weights so that it learns patterns and relationships in the data.

Fine-Tuning: Fine-tuning assumes that the model has acquired a foundational understanding of language through training. This phase involves making targeted adjustments to tailor the model for a specific task or domain. Think of it as honing a well-educated model for a particular task.

Prompt Engineering: Prompt engineering revolves around the crafting of input prompts or questions to guide the LLM in generating desired outputs. It's about customizing the interaction with the model to elicit specific results.


Prompt Engineering:


Step 1: Load the Dataset

Let's load the Alpaca-format dataset and work with a sample of it.


Loading the custom dataset
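
Here is a minimal sketch of this loading step using the Hugging Face datasets library. The dataset id sahil2801/CodeAlpaca-20k is a stand-in assumption, not necessarily the exact dataset used; swap in whichever Alpaca-format dataset you are working with. The sample size of 500 is an illustrative choice.

from datasets import load_dataset

# Assumed stand-in dataset: an Alpaca-format code dataset with
# "instruction", "input" and "output" columns.
dataset = load_dataset("sahil2801/CodeAlpaca-20k", split="train")

# Work with a small sample (500 examples, as an illustrative choice).
dataset = dataset.shuffle(seed=42).select(range(500))

print(dataset)
print(dataset[0])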

Dataset: The purpose of this dataset is to train a substantial language model to interpret instructions and generate code from natural language. Each entry in the dataset comprises:

  1. An instruction that describes a specific task.

  2. An input section, offering additional context when necessary for understanding the instruction.

  3. The anticipated output that corresponds to the given instruction.

Instruction | Input | Output
Write a function in Python for printing a give.. | arr = [[1,2,3], [4,5,6], [7,8,9]] | def print_2D_array(arr):\n for row in arr:\n ..
Write an SQL statement to select the name, cit.. | | SELECT name, city, country\nFROM stores;

Step 2: Load the model

Let us load the model and tokenizer.
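
A minimal sketch of this step, assuming the meta-llama/Llama-2-7b-hf checkpoint from the Hugging Face Hub (gated behind Meta's license) and 4-bit loading with bitsandbytes so the 7B model fits on a single GPU; the exact checkpoint and quantization settings are assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; access requires accepting Meta's license

# 4-bit (NF4) quantization so the 7B model fits on a single GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)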



Step 3: Create a prompt and make predictions.

Let us create a prompt and check the generated outputs.
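
A sketch of this step: we format one example from the dataset using the Instruction/Input/Response template (described in Step 4 below) and ask the base model to complete it. The generation settings are illustrative.

# Build a prompt from one dataset example and let the base model complete it.
example = dataset[0]
prompt = (
    f"### Instruction:\n{example['instruction']}\n\n"
    f"### Input:\n{example['input']}\n\n"
    f"### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))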



Output:

Output of the prompt method

The base Llama model cannot comprehend instructions and provide relevant answers for our specific task; instead, it simply echoes the inputs we provided until reaching the token limit.


In the rest of this post, we will focus on fine-tuning the model. There are several ways to fine-tune an LLM.


Fine Tuning


Full Fine-Tuning:

  • Entails training the entire pre-trained model with new data.

  • Involves updating all model layers and parameters during the fine-tuning process.

  • While it can yield high accuracy, it demands substantial computational resources and time.

  • This presents a risk of catastrophic forgetting, where updating all weights may cause the model to unintentionally lose knowledge acquired during pretraining. This can result in outcomes ranging from increased error rates to the complete erasure of previously learned capabilities, leading to suboptimal performance.

  • Best suited for scenarios where the target task significantly differs from the original pre-training task.

Parameter Efficient Fine-Tuning (PEFT), e.g., LoRA:

  • Concentrates on updating only a subset of the model's parameters.

  • Frequently, this involves freezing specific layers or portions of the model to prevent catastrophic forgetting. Alternatively, additional trainable layers may be introduced while keeping the original model's weights frozen.

  • Can facilitate faster fine-tuning with fewer computational resources, though it may sacrifice some accuracy compared to full fine-tuning.

  • Encompasses methods like LoRA, AdaLoRA, and Adaption Prompt (LLaMA Adapter).

  • Ideal when the new task shares similarities with the original pre-training task.


Quantization-Based Fine-Tuning (QLoRA):

  • Involves reducing the precision of model parameters, such as converting 32-bit floating-point values to 8-bit or 4-bit integers.

  • Reduces CPU and GPU memory requirements by a factor of 4x with 8-bit integers or 8x with 4-bit integers, relative to 32-bit floats.

  • However, this reduction in precision may lead to a loss in performance.

  • Can be advantageous for deploying models on resource-constrained devices like mobile phones or edge devices, as it reduces memory usage and enables faster inference on hardware with reduced precision support.


We will use quantization-based fine-tuning (QLoRA) below, continuing from the previous steps.


Step 4: Dataset preparation

We need to modify our dataset in a manner consistent with how the model was trained. Let's create a new field called text which is in the format "### Instruction: <instruction> ### Input: <input> ### Response: <output>".
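
A sketch of this formatting step using datasets.map; the exact whitespace in the template is a judgment call, but the column names follow the instruction/input/output fields described in Step 1.

def format_example(example):
    # Combine the columns into the single "text" field the trainer will consume.
    example["text"] = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Input:\n{example['input']}\n\n"
        f"### Response:\n{example['output']}"
    )
    return example

dataset = dataset.map(format_example)
print(dataset[0]["text"])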



Step 5: Prepare the model for training

We will create a LoRA config and prepare the model for training using TrainingArguments.
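
A sketch of this step with peft and transformers; the rank, alpha, target modules, and training hyperparameters below are common QLoRA-style defaults rather than tuned values, so treat them as assumptions.

from peft import LoraConfig, prepare_model_for_kbit_training
from transformers import TrainingArguments

# Prepare the 4-bit model for training (enables gradient checkpointing, casts layer norms, etc.)
model = prepare_model_for_kbit_training(model)

# LoRA adapter configuration; values are typical choices, not tuned hyperparameters
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

training_args = TrainingArguments(
    output_dir="llama2-7b-code-lora",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
    bf16=True,
    save_strategy="epoch",
)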



Step 6: Model Training

We will now train the model and save the LoRA adapter.
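
A sketch using TRL's SFTTrainer; the argument names (dataset_text_field, max_seq_length, tokenizer) follow older TRL releases, so adjust them if your installed version has moved these options into SFTConfig.

from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # the field created in Step 4
    max_seq_length=512,
    tokenizer=tokenizer,
    args=training_args,
)

trainer.train()

# Save only the LoRA adapter weights (a few MB) plus the tokenizer
trainer.model.save_pretrained("llama2-7b-code-lora-adapter")
tokenizer.save_pretrained("llama2-7b-code-lora-adapter")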



Step 7: Predictions

Let's now load the saved model and use it to make predictions on a few test examples to see whether fine-tuning improved the model's capability on our task.
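
A sketch of this step with peft's PeftModel; the adapter path matches the one saved above, and the test prompt is a hypothetical example in the spirit of the dataset.

from peft import PeftModel

# Reload the 4-bit base model (as in Step 2) and attach the trained LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
ft_model = PeftModel.from_pretrained(base_model, "llama2-7b-code-lora-adapter")
ft_model.eval()

# Hypothetical test example in the dataset's Instruction/Input/Response format
prompt = (
    "### Instruction:\nWrite a function in Python for printing a 2D array row by row.\n\n"
    "### Input:\narr = [[1,2,3], [4,5,6], [7,8,9]]\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(ft_model.device)
output_ids = ft_model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))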


Output of the fine-tuned model

As we can see in the image above, the model now generates much better outputs for our specific task.


Conclusion:

Even with a modest fine-tuning effort, involving just 500 examples from our dataset and about 5 minutes of training, we can see noticeable improvements in the model's generations for our specific task.

The fine-tuned model shows a significant improvement over relying solely on prompting, as indicated by the absence of repetition and the correctness of the code in the answers.
