
Accelerating Inference: Optimizing Performance for Large Language Models in Production Environments Part-1

Introduction


Optimizing LLM performance

Models like Llama and Falcon are very large, with billions of parameters, and are commonly referred to as large language models (LLMs). Deploying LLMs presents a significant challenge in real-world applications. As these models continue to advance, optimizing their performance becomes crucial for achieving efficient inference in production environments. This article addresses that challenge by providing strategies, tips, and tricks to optimize infrastructure during deployment. In the following sections, we explore the various factors that influence inference speed and efficiency, offering actionable insights to enhance the performance of LLMs in production settings.


Model Utilization Techniques


Prompt Engineering:

  • Zero-Shot Prompting: Consider the prompt, "Classify the following review: # some review. sentiment:". Passing an input along with an instruction and no examples is called zero-shot prompting. While larger models may classify the sentiment correctly, smaller models may struggle here.

  • One-shot Prompting: In this method, we pass a completed example along with the question, "Classify the following review: # some review. sentiment: Positive. Classify the following review: # some review. sentiment: ". The model now has a better chance of answering correctly.

  • Few-shot Prompting: An extension of one-shot prompting in which we pass multiple examples in the prompt instead of one. This also leads to a larger prompt (see the sketch after this list).
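
As a rough illustration of few-shot prompting, the sketch below builds a prompt with two labelled examples and sends it to a text-generation pipeline. It assumes the Hugging Face transformers library; the model id (gpt2) and the example reviews are placeholders, and a small model like this may still answer imperfectly.

```python
from transformers import pipeline

# Placeholder model; in practice a larger instruction-tuned LLM would be used.
generator = pipeline("text-generation", model="gpt2")

# Few-shot prompt: two completed examples followed by the actual query.
few_shot_prompt = (
    "Classify the following review: The battery died within a week. sentiment: Negative\n"
    "Classify the following review: Great screen and fast delivery. sentiment: Positive\n"
    "Classify the following review: The product exceeded my expectations. sentiment:"
)

output = generator(few_shot_prompt, max_new_tokens=3, do_sample=False)
print(output[0]["generated_text"])
```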


Fine-Tuning:

  • Full fine-tuning: We can improve performance on a specific task by fine-tuning the model. Fine-tuning pretrained LLMs on downstream datasets results in huge performance gains compared to using them out-of-the-box. However, full fine-tuning updates every parameter, which makes it infeasible on consumer hardware.

  • Parameter Efficient Fine Tuning (PEFT): PEFT approaches fine-tune only a small number of (extra) model parameters while freezing most parameters of the pretrained LLM, which leads to faster computation. There are three broad approaches in PEFT: selective, reparameterisation, and additive. One of the most widely used techniques is the reparameterisation approach called low-rank adaptation (LoRA); a sketch follows this list.
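
A minimal LoRA sketch using the Hugging Face peft library is shown below. The base model id and the target_modules list are assumptions for illustration; the module names to adapt depend on the model architecture.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; any causal LM supported by peft works similarly.
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# LoRA: only small low-rank adapter matrices are trained,
# while the original weights stay frozen.
lora_config = LoraConfig(
    r=8,                                   # rank of the update matrices
    lora_alpha=16,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt (model-dependent)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()    # typically well under 1% of all parameters
```

Because only the adapter matrices are updated during training, PEFT fits on far smaller hardware than full fine-tuning.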


Retrieval Augmented Generation (RAG):

Retrieval Augmented Generation (RAG) is the process of augmenting an LLM's prompt with information retrieved from external knowledge sources. Here, we retrieve the relevant information from external knowledge bases before answering questions with the LLM.


  • The process starts with computing an embedding of the user's query (typically with a dedicated embedding model).

  • A similarity search is then performed between the query and the external dataset, and the retrieved context is used to augment the prompt before the LLM generates a response (see the sketch after this list).
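
A minimal RAG sketch under a few assumptions: the sentence-transformers library provides the embeddings, the knowledge base is a toy in-memory list, and the resulting augmented prompt would then be passed to an LLM of your choice.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy in-memory knowledge base; in practice this would be a vector database.
documents = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Shipping typically takes 3-5 business days.",
    "Support is available by email at support@example.com.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    query_embedding = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_embeddings @ query_embedding
    top_idx = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top_idx]

query = "How long do I have to return an item?"
context = "\n".join(retrieve(query))

# The retrieved context is prepended to the prompt before calling the LLM.
augmented_prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(augmented_prompt)
```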


Utilizing pre-trained models directly for inference tasks:

This approach is efficient for tasks where pre-trained models already exhibit desired performance. Direct inference minimizes computational time and simplifies deployment, but may not be suitable if our task is significantly different from what the model was trained on.
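
For example, a pretrained sentiment model can be used directly through a Hugging Face pipeline with no training step involved (the printed output below is indicative only):

```python
from transformers import pipeline

# Uses an off-the-shelf pretrained sentiment model, no fine-tuning required.
classifier = pipeline("sentiment-analysis")

print(classifier("The battery life is fantastic and setup was easy."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```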



Strategies for Model Deployment


Real-time inference:

  • Facilitating immediate response generation, suitable for applications requiring instant feedback.

  • Ideal for scenarios such as chatbots, real-time translation, and voice assistants.

Serverless inference:

  • Ideal for intermittent traffic.

  • Eliminates the need for managing infrastructure, and scales automatically based on incoming traffic. Offers cost-effectiveness and scalability.

Batch transform:

  • Suitable for processing large volumes of data efficiently by performing inference in batches.

  • Suitable for tasks like batch processing of documents and bulk text generation.

Asynchronous inference:

  • Queues incoming requests and processes them asynchronously.

  • This option is ideal for requests with large payload sizes (up to 1GB), long processing times (up to one hour), and near real-time latency requirements.


Factors Affecting Inference Speed


Hardware Considerations:

  • The choice of hardware, including CPU, GPU, or TPU, significantly impacts inference speed.

  • Compute: Large Language Models demand substantial computational resources, with GPUs and TPUs offering more efficient handling than CPUs.

  • Cost: GPUs and TPUs deliver superior performance but at a higher cost compared to CPUs.

  • Parallelism: Due to their exceptional parallel processing capabilities, GPUs are commonly favoured for inference tasks, enabling significant acceleration of computations.

  • Memory: GPUs generally boast larger onboard memory capacities than CPUs, making them the preferred option for memory-intensive tasks like deep learning inference. Additionally, TPUs offer high memory bandwidth, ensuring efficient processing of large models and datasets.

Software Optimizations:


Quantization:

Quantization reduces the computational and memory expenses of inference by converting the network's weights and biases from their original floating-point format (e.g., 32-bit) to a lower precision format (e.g., 8-bit). This reduction in the number of bits minimizes memory usage and enables faster matrix multiplication operations through integer arithmetic, resulting in faster inference. Additionally, it facilitates the execution of models on embedded devices that may only support integer data types.
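
A possible sketch of 8-bit loading with transformers and bitsandbytes (this assumes both libraries plus accelerate are installed and a GPU is available; the model id is just an example):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize the model's linear layers to 8-bit on load (bitsandbytes backend).
quant_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",                 # example model; any causal LM works
    quantization_config=quant_config,
    device_map="auto",                 # place layers on available GPUs automatically
)
```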


Pruning:


Pruning involves removing weights or neurons to reduce the model's size while maintaining accuracy. It shrinks the LLM by getting rid of the weights that contribute minimally to the output. Removing these less important parameters reduces both the model size and the number of computations needed during inference, leading to faster and more efficient performance. There are two main strategies for pruning: unstructured pruning, where weights with the smallest absolute values are removed, and structured pruning, which eliminates entire channels or filters based on their importance.
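
The toy sketch below illustrates both strategies on a single linear layer using PyTorch's pruning utilities. Real LLM pruning operates on the full model, and zeroed weights only translate into actual speedups when sparse kernels or structured removal are used.

```python
import torch
import torch.nn.utils.prune as prune

# A toy linear layer standing in for one layer of a larger model.
layer = torch.nn.Linear(1024, 1024)

# Unstructured pruning: zero out the 30% of weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# Structured pruning: remove 20% of entire output rows, ranked by L2 norm.
prune.ln_structured(layer, name="weight", amount=0.2, n=2, dim=0)

# Make the pruning permanent by removing the re-parametrization hooks.
prune.remove(layer, "weight")
print(float((layer.weight == 0).float().mean()))  # fraction of zeroed weights
```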


Continuous Batching:

Despite the parallel-compute architectures of GPUs, LLMs often struggle to leverage this potential due to memory bandwidth constraints. Batching emerges as a solution to this issue, allowing for the loading of model parameters once and processing multiple input sequences efficiently.


Static batching

In static batching, the composition of the batch remains fixed until inference is complete, so the GPU is underutilized while it waits for the last sequence in the batch to finish generation.


Continuous batching

An alternative is iteration-level scheduling, which determines the batch composition dynamically at each iteration instead of waiting for all sequences to complete generation. This approach enables the insertion of new sequences as soon as previous ones finish, leading to higher GPU utilization than static batching.
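
To make the difference concrete, here is a toy, model-free sketch of iteration-level scheduling. The next_token function is a hypothetical stand-in for one decoding step of a real model; serving frameworks implement this scheduling loop internally.

```python
import random
from collections import deque

EOS = "<eos>"

def next_token(seq):
    # Stand-in for one decoding step: randomly decide whether the sequence ends.
    return EOS if random.random() < 0.2 else "tok"

MAX_BATCH = 4
waiting = deque([[f"prompt-{i}"] for i in range(10)])  # toy requests
running = []

while waiting or running:
    # Continuous batching: fill free slots as soon as they open up,
    # instead of waiting for the whole batch to finish (static batching).
    while waiting and len(running) < MAX_BATCH:
        running.append(waiting.popleft())

    # One decoding iteration across the current batch.
    for seq in running:
        seq.append(next_token(seq))

    # Retire finished sequences immediately so new ones can be scheduled.
    running = [seq for seq in running if seq[-1] != EOS]
```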


PagedAttention:

LLMs struggle with memory limitations during generation. In the decoding part of generation, all the attention keys and values generated for previous tokens are stored in GPU memory for reuse. This is called KV cache, and it may take up a large amount of memory for large models and long sequences.

PagedAttention optimizes memory use by partitioning the KV cache into blocks that are accessed through a lookup table. Thus, the KV cache does not need to be stored in contiguous memory, and blocks are allocated as needed. The improved memory efficiency increases GPU utilization on memory-bound workloads, so more inference batches can be served, which significantly improves throughput.
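
vLLM is one open-source serving library that implements PagedAttention together with continuous batching. A minimal usage sketch follows; the model id is a placeholder.

```python
from vllm import LLM, SamplingParams

# vLLM handles PagedAttention and continuous batching internally.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
sampling = SamplingParams(temperature=0.8, max_tokens=64)

prompts = [
    "Explain KV caching in one sentence.",
    "Give one tip for faster LLM inference.",
]
outputs = llm.generate(prompts, sampling)
for out in outputs:
    print(out.outputs[0].text)
```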


Model Architecture and Size:

Fewer Model Parameters:

Inference speed also depends on the size of the model. We can use lighter models such as quantized models, TinyLlama, which has around 1 billion parameters, or Microsoft Phi-2 (https://huggingface.co/microsoft/phi-2), which has around 2.7 billion parameters. The trade-off is that smaller models usually come with a dip in quality.


Knowledge Distillation


Knowledge distillation involves transferring knowledge from a large model to a smaller one while maintaining performance, making it a practical form of model compression. In this process, a small "student" model learns to mimic a large "teacher" model, leveraging the teacher's knowledge to achieve comparable performance.
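
A toy sketch of the standard distillation loss: the student is trained on a mix of the teacher's softened output distribution (KL divergence, scaled by a temperature) and the usual cross-entropy against ground-truth labels. All tensors below are random placeholders standing in for real model outputs.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: KL divergence between softened teacher and student outputs.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: usual cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Random tensors in place of real student/teacher model outputs.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))

loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```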


Conclusion:

In this blog, we explored various LLM utilization methods, strategies for model deployment, and factors impacting inference speed. In the upcoming article, we'll delve into libraries integrated with LLMs, considerations for production deployment, and performance comparison tools/resources for different LLMs.

