vLLM: Serve Fast Mistral 7B and Llama 2 Models from Your Computer
Offline inference and serving with quantized models
vLLM is one of the fastest frameworks available for serving large language models (LLMs). It implements many inference optimizations, including custom CUDA kernels and PagedAttention, and supports various model architectures, such as Falcon, Llama 2, Mistral 7B, Qwen, and more. These models can be served quantized and with LoRA adapters.
In this article, I present vLLM and demonstrate how to serve Mistral 7B and Llama 2, quantized with AWQ and SqueezeLLM, from your computer. I show how to do it offline and with a vLLM local server running in the background. Note that, while I use Mistral 7B and Llama 2 7B in this article, the same approach works for the other LLMs supported by vLLM.
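To give you a first idea of what offline inference looks like, here is a minimal sketch using vLLM's Python API with an AWQ-quantized Mistral 7B. The model repository name is only an example, and the exact arguments may differ from what I use later in the article.

```python
from vllm import LLM, SamplingParams

# Load an AWQ-quantized Mistral 7B (example Hugging Face repository name)
llm = LLM(model="TheBloke/Mistral-7B-Instruct-v0.1-AWQ", quantization="awq")

# Sampling configuration for generation
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)

# Run offline inference on a batch of prompts
prompts = ["What is PagedAttention in vLLM?"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```

The serving mode works differently: instead of calling the model directly from Python, a local vLLM server runs in the background and answers requests, as I show later in the article.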
You can replicate my experiments by running this notebook.
Note: I demonstrate how to use vLLM using an NVIDIA GPU, but vLLM also supports AMD GPUs with ROCm.
PagedAttention for Faster Inference with vLLM
Back in June 2023, I first wrote about vLLM in this article: