Run Mixtral-8x7B on Consumer Hardware with Expert Offloading
Finding the right trade-off between memory usage and inference speed
While Mixtral-8x7B is one of the best open large language models (LLMs), it is also a huge model with 46.7B parameters. Even when quantized to 4-bit, the model can’t be fully loaded on a consumer GPU (e.g., an RTX 3090 with 24 GB of VRAM is not enough).
Mixtral-8x7B is a mixture of experts (MoE). It is made of 8 expert sub-networks of about 6 billion parameters each. I explained in more detail how the model works in this article:
Since only 2 of the 8 experts are active at each layer during decoding, the 6 remaining experts can be moved, or offloaded, to another device, e.g., the CPU RAM, to free up some of the GPU VRAM. In practice, this offloading is complicated.
Which experts to activate is decided at inference time, for each input token and at each layer of the model. Naively moving parts of the model to the CPU RAM, as with Accelerate’s device_map, would create a communication bottleneck between the CPU and the GPU.
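For reference, here is what the naive approach looks like with Transformers and Accelerate. This is only a sketch: the memory budget passed to max_memory is illustrative, and with this setup the layers that don’t fit in VRAM (inactive experts included) end up in CPU RAM, which is exactly the bottleneck described above.

```python
import torch
from transformers import AutoModelForCausalLM

# Naive offloading: Accelerate assigns the layers that don't fit in VRAM to
# CPU RAM, including inactive experts, which creates a CPU-GPU communication
# bottleneck at every forward pass.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    torch_dtype=torch.float16,
    device_map="auto",                        # let Accelerate split the model
    max_memory={0: "22GiB", "cpu": "96GiB"},  # illustrative budget for a 24 GB GPU
)
```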
Mixtral-offloading (MIT license) is a project that proposes a much more efficient solution to reduce VRAM consumption while preserving a reasonable inference speed.
In this article, I explain how mixtral-offloading implements expert-aware quantization and expert offloading to save memory and maintain a good inference speed. Using this framework, we will see how to run Mixtral-8x7B on consumer hardware and benchmark its inference speed.
The tutorial section is also available as a notebook that you can find here:
Caching & Speculative Offloading
MoE language models often allocate distinct experts to sub-tasks, but not consistently across long token sequences. Some experts are active for short sequences of 2 to 4 tokens, while others show intermittent "gaps" in their activation. This is well illustrated by the following figure:
To capitalize on this pattern, the authors of mixtral-offloading suggest keeping the active experts in GPU memory as a "cache" for future tokens, so that they are immediately available if they are needed again. Since GPU memory limits how many experts can be stored, they use a simple Least Recently Used (LRU) cache: each layer keeps its k most recently used experts on the GPU (the same k for all layers) and evicts the least recently used one when a new expert must be loaded.
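The sketch below illustrates the idea; it is not the project’s actual implementation. It is a per-layer LRU cache, keyed by expert index, that keeps expert weights on the GPU, where the load_fn callback stands for the copy from CPU RAM to the GPU.

```python
from collections import OrderedDict

class ExpertLRUCache:
    """Keep at most `k` experts per layer on the GPU; evict the least recently
    used expert of that layer when a new one must be loaded."""

    def __init__(self, k: int):
        self.k = k
        self.cache = {}  # layer_id -> OrderedDict[expert_id, expert_weights_on_gpu]

    def get(self, layer_id: int, expert_id: int, load_fn):
        layer_cache = self.cache.setdefault(layer_id, OrderedDict())
        if expert_id in layer_cache:
            layer_cache.move_to_end(expert_id)   # mark as most recently used
            return layer_cache[expert_id]
        if len(layer_cache) >= self.k:
            layer_cache.popitem(last=False)      # evict the least recently used expert
        weights = load_fn(layer_id, expert_id)   # copy from CPU RAM to the GPU
        layer_cache[expert_id] = weights
        return weights
```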
Despite its simplicity, the LRU cache strategy significantly speeds up inference for MoE models like Mixtral-8x7B.
However, while LRU caching improves the average expert loading time, a significant portion of inference time still involves waiting for the next expert to load. MoE offloading lacks effective overlap between expert loading and computation.
In standard (non-MoE) models, an efficient offloading schedule pre-loads the next layer while the previous one runs. This isn't directly possible for MoE models: experts are selected just in time for computation, so the system can't know which experts to pre-fetch until the next layer's router has run. Still, the authors found that speculative loading can guess the next experts while the previous layer is being processed, which accelerates the next layer's inference when the guess is correct.
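Conceptually, speculative loading applies the next layer’s gating function to the current layer’s hidden states to guess which experts to prefetch. The snippet below is a simplified illustration of that idea, not the code of mixtral-offloading; the router and cache names in the usage comment are hypothetical.

```python
import torch

def guess_next_experts(hidden_states: torch.Tensor, next_layer_router, top_k: int = 2):
    """Speculative guess: run the *next* layer's router on the *current* layer's
    hidden states and return the experts it would pick, so they can be
    prefetched. If the guess is right, the experts are already on the GPU
    when the next layer actually runs."""
    router_logits = next_layer_router(hidden_states)          # (n_tokens, n_experts)
    guessed = torch.topk(router_logits, top_k, dim=-1).indices
    return guessed.unique().tolist()

# While layer i computes its output, prefetch for layer i + 1 (hypothetical names):
# for expert_id in guess_next_experts(hidden_i, routers[i + 1]):
#     expert_cache.get(i + 1, expert_id, load_fn)  # ideally a non-blocking copy
```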
To sum up, an LRU cache and speculative loading save VRAM while keeping inference efficient: only the experts that are the least likely to be needed next stay offloaded.
Expert-Aware Aggressive Quantization
In addition to expert offloading, we need to quantize the model to make it run on consumer hardware. Naive 4-bit quantization with bitsandbytes’ NF4 reduces the size of the model to 23.5 GB. This is still too large for a consumer-grade GPU with at most 24 GB of VRAM, since we also need memory for the activations and the KV cache.
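For comparison, this is the standard NF4 setup with Transformers and bitsandbytes, which quantizes all linear layers, experts included, to 4-bit:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Standard NF4 quantization: every linear layer, experts included, goes to 4-bit.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    quantization_config=nf4_config,
    device_map="auto",
)
```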
Previous studies showed that the experts of an MoE can be aggressively quantized to lower precision without much impact on the model's performance. However, the authors of mixtral-offloading mention in their technical report that they tried 1-bit quantization methods, such as the one proposed for QMoE, and observed a significant drop in performance.
Instead, they applied mixed-precision quantization: the experts are quantized aggressively while the non-expert parameters are kept at 4-bit.
Among the 46.7 billion parameters of Mixtral-8x7B, 96.6% (45.1 billion) belong to the experts, while the remainder is allocated to the token embeddings, the self-attention layers, the MoE gates, and other minor layers such as LayerNorm.
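Since the experts dominate the parameter count, quantizing them more aggressively is what really shrinks the model. A back-of-the-envelope estimate (weights only, ignoring quantization metadata such as scales and zero-points) makes this clear:

```python
# Rough weight-only footprint of Mixtral-8x7B under mixed-precision quantization.
expert_params = 45.1e9                 # ~96.6% of the 46.7B parameters
other_params = 46.7e9 - expert_params  # embeddings, attention, gates, etc.

def size_gb(n_params: float, bits: int) -> float:
    return n_params * bits / 8 / 1e9

for expert_bits in (4, 3, 2):
    total = size_gb(expert_params, expert_bits) + size_gb(other_params, 4)
    print(f"experts at {expert_bits}-bit, rest at 4-bit: ~{total:.1f} GB")

# experts at 4-bit: ~23.4 GB, at 3-bit: ~17.7 GB, at 2-bit: ~12.1 GB
```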
For quantization, they chose to apply Half-Quadratic Quantization (HQQ) (Badri & Shaji, 2023), a data-free algorithm that supports various bitwidths.
They tried several quantization configurations:

- FP16 (no quantization)
- HQQ 4-bit with group size 64 and scale group size 256
- HQQ 3-bit with group size 64 and scale group size 128
- HQQ 2-bit with group size 16 and scale group size 128
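As an illustration only, the hqq library exposes per-layer configurations along these lines. The exact arguments used by mixtral-offloading (in particular the scale group sizes) are not reproduced here, and the signatures below should be checked against the hqq documentation, so treat this as a sketch rather than the authors’ setup.

```python
import torch
from hqq.core.quantize import BaseQuantizeConfig, HQQLinear

# Illustrative mixed-precision scheme: aggressive 2-bit HQQ for expert weights,
# milder 4-bit HQQ for the attention projections.
expert_cfg = BaseQuantizeConfig(nbits=2, group_size=16)
attn_cfg = BaseQuantizeConfig(nbits=4, group_size=64)

def quantize_linear(layer: torch.nn.Linear, cfg) -> HQQLinear:
    # Replace a full-precision nn.Linear with its HQQ-quantized counterpart.
    return HQQLinear(layer, quant_config=cfg, compute_dtype=torch.float16)
```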
As shown in the following table, it appears advantageous to quantize experts to 3 or 2 bits while maintaining attention layers at a higher bitwidth (16 or 4 bits).
After applying quantization and expert offloading, inference is between 2 and 3 times faster than with the offloading implemented by Accelerate (device_map):