For those familiar with Python, fine-tuning large language models (LLMs) with the Hugging Face libraries is straightforward.
However, several open-source frameworks now make it possible to fine-tune LLMs without writing any code.
Axolotl is one of them. It is designed to simplify the fine-tuning process, with extensive support for many training configurations and model architectures. Users only need to write a fine-tuning configuration file and pass it to the framework; Axolotl takes care of everything else, including managing package dependencies and preprocessing the dataset.
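To make this concrete, here is a minimal sketch of what such a configuration file could look like for QLoRA fine-tuning of Llama 3. The field names follow Axolotl's published example configs, but this is an illustration rather than a verified, version-specific recipe; the dataset, LoRA rank, and learning rate are placeholder values, and you should check the examples in the Axolotl repository for the schema used by your installed version.

```yaml
# Hypothetical file: qlora-llama3.yaml
# Sketch of an Axolotl QLoRA config; key names based on Axolotl's example configs.
base_model: meta-llama/Meta-Llama-3-8B

load_in_4bit: true          # quantize the base model to 4-bit (QLoRA)
adapter: qlora              # train LoRA adapters on top of the quantized model
lora_r: 16                  # LoRA rank
lora_alpha: 32              # LoRA scaling factor
lora_dropout: 0.05
lora_target_linear: true    # apply LoRA to all linear layers

datasets:
  - path: mhenrichsen/alpaca_2k_test   # placeholder: any instruction dataset on the Hub
    type: alpaca                       # prompt format of the dataset

sequence_len: 2048
sample_packing: true        # pack short examples into full-length sequences

micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
optimizer: adamw_torch
lr_scheduler: cosine

output_dir: ./outputs/qlora-llama3
```

Once a file like this is written, training is launched with a single command that points Axolotl at it; the rest of this article walks through the hyperparameters it contains and the logs the run produces.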
In this article, I show how to use Axolotl for QLoRA/LoRA fine-tuning, with Llama 3 as an example. I focus on explaining the hyperparameters and on interpreting the logs of an Axolotl fine-tuning run.
The following notebook shows how to fine-tune Llama 3 on your computer using Axolotl and QLoRA: