The Kaitchup – AI on a Budget
GaLore: Full Fine-tuning on Your GPU

And pre-training!

Benjamin Marie
Apr 04, 2024 ∙ Paid


Fine-tuning large language models requires a huge amount of GPU memory: we need enough RAM to load the model and to store the optimizer states. Take a 7-billion-parameter model such as Mistral 7B: loading it with 16-bit parameters already takes 14 GB of GPU memory. Moreover, for each parameter, the standard AdamW optimizer creates and stores two new 32-bit states, i.e., an additional 56 GB of memory. That’s already a total of 70 GB!

This is without counting the additional copies of the model made at various stages of fine-tuning, nor the model’s activations, whose memory consumption is also significant but difficult to estimate since it depends on the hyperparameters (e.g., batch size and sequence length). A single 80 GB A100/H100 GPU wouldn’t be enough to fully fine-tune a 7B model.

Storing the optimizer states is by far the most expensive part. Even if we quantize the optimizer states to 8-bit, they still require 14 GB of memory.
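For reference, here is a minimal sketch of the arithmetic behind these numbers (activations and temporary buffers are ignored, and 1 GB is rounded to 1e9 bytes):

```python
# Back-of-the-envelope memory estimate for full fine-tuning of a 7B model with AdamW.
num_params = 7e9
gb = 1e9  # rounding convention used in the article

model_bf16 = num_params * 2      # 16-bit weights: 2 bytes per parameter
adamw_fp32 = num_params * 2 * 4  # 2 optimizer states per parameter, 32-bit each
adamw_8bit = num_params * 2 * 1  # same states quantized to 8-bit

print(f"Model (16-bit):        {model_bf16 / gb:.0f} GB")  # 14 GB
print(f"AdamW states (32-bit): {adamw_fp32 / gb:.0f} GB")  # 56 GB
print(f"AdamW states (8-bit):  {adamw_8bit / gb:.0f} GB")  # 14 GB
print(f"Total (model + 32-bit states): {(model_bf16 + adamw_fp32) / gb:.0f} GB")  # 70 GB
```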

GaLore can significantly reduce this memory consumption to a point where it becomes possible to perform full fine-tuning of a 7B model on a 24 GB consumer GPU. Moreover, while parameter-efficient fine-tuning (PEFT) methods such as LoRA often don’t perform as well as full fine-tuning, GaLore performs comparably to full fine-tuning.


In this article, I present GaLore. We will see how it works and why it is more memory-efficient. Then, I show how to use it to fully fine-tune Mistral 7B on consumer hardware. This tutorial can also be used to fully fine-tune most LLMs supported by Hugging Face Transformers.
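To give an idea of what this looks like in practice, here is a minimal sketch using the GaLore optimizers integrated into Hugging Face Transformers (it requires the galore-torch package). The dataset, hyperparameters, and target-module patterns below are illustrative placeholders, not necessarily those used in the notebook:

```python
# Sketch: full fine-tuning of Mistral 7B with GaLore via Transformers' built-in
# GaLore optimizers (pip install galore-torch). Settings are illustrative.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral's tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Any causal-LM dataset with a "text" column works here; this one is an example.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

training_args = TrainingArguments(
    output_dir="./mistral7b-galore",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    gradient_checkpointing=True,
    learning_rate=1e-5,
    num_train_epochs=1,
    bf16=True,
    optim="galore_adamw",                  # AdamW wrapped with GaLore's low-rank projection
    optim_target_modules=["attn", "mlp"],  # module name patterns that GaLore is applied to
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```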

The notebook implementing GaLore’s full fine-tuning for Mistral 7B is available here:

Get the notebook (#57)

This post is for paid subscribers

Already a paid subscriber? Sign in
© 2025 The Kaitchup
Privacy ∙ Terms ∙ Collection notice
Start writingGet the app
Substack is the home for great culture

Share

Copy link
Facebook
Email
Notes
More