LoRA at Scale on a Consumer GPU: Does It Work?
Reproducing TULU 3 SFT on Consumer Hardware Using LoRA and Unsloth
LoRA is well known for drastically cutting the cost of supervised fine-tuning (SFT), and many tutorials demonstrate how to get started. However, most of these focus on narrow tasks, small datasets, or lightweight demos. What they don’t address is the more important question for real-world use cases: Can LoRA match the performance of full fine-tuning on a large-scale dataset, while costing 10 times less?
That’s what we’ll explore in this article. And spoiler: the answer is (almost) yes. With LoRA and tools like Unsloth, it’s possible to replicate TULU 3’s state-of-the-art SFT recipe on a single 24 GB GPU (e.g., an RTX 4090), whereas the original full fine-tuning setup from AI2 required multiple GPU nodes and several hours of compute. We’ll walk through how to reproduce their results with this far more accessible setup, yielding a high-quality Llama 3.1 chat model.
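To make the idea concrete, here is a minimal sketch of what such a run looks like with Unsloth and TRL. It is illustrative only: the model and dataset IDs (`unsloth/Meta-Llama-3.1-8B`, `allenai/tulu-3-sft-mixture`) and all hyperparameters (LoRA rank and alpha, learning rate, sequence length, batch size) are assumptions for this example, not the tuned values of the recipe we reproduce below, and exact `SFTTrainer` argument names can vary between TRL versions.

```python
# Illustrative sketch of LoRA SFT with Unsloth on a single 24 GB GPU.
# Model/dataset IDs and hyperparameters are placeholders, not the article's recipe.
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load Llama 3.1 8B with 4-bit quantized base weights so it fits in 24 GB of VRAM.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",  # assumed model ID
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention and MLP projections (placeholder rank/alpha).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

# TULU 3 SFT mixture from AI2 (assumed Hub ID); render each conversation
# into a single training string with the Llama 3.1 chat template.
tokenizer = get_chat_template(tokenizer, chat_template="llama-3.1")
dataset = load_dataset("allenai/tulu-3-sft-mixture", split="train")
dataset = dataset.map(
    lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)},
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,  # small per-device batch, compensated by accumulation
        learning_rate=2e-5,              # placeholder value
        num_train_epochs=1,              # placeholder value
        bf16=True,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```

The key point of this pattern is that only the small LoRA adapter matrices are trained while the 4-bit base weights stay frozen, which is what brings the memory footprint down to a single consumer card; the actual hyperparameters we use are discussed later in the article.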
This is just the beginning: in a follow-up article, we’ll also test whether the same approach transfers well to other models, like Qwen3, or if the current recipe is uniquely tuned to Llama 3.1.
My SFT recipe, using LoRA, Unsloth, and a single 24 GB GPU, can be tried with this notebook: