The Kaitchup – AI on a Budget

Simple QLoRA Fine-tuning with Axolotl

Streamlining fine-tuning

Benjamin Marie's avatar
Benjamin Marie
Jun 24, 2024

For those familiar with Python, fine-tuning large language models (LLMs) is straightforward with the Hugging Face libraries.

Related: Fine-tune Phi-3 Medium on Your Computer (Benjamin Marie, June 3, 2024)

However, several open-source frameworks now enable fine-tuning LLMs without coding.

Axolotl is one of them. It’s a tool designed to simplify the fine-tuning process, with extensive support for various configurations and architectures. With Axolotl, users only need to create a fine-tuning configuration file and provide it to the framework. Axolotl takes care of all other aspects, including managing package dependencies and dataset preprocessing.
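For instance, a QLoRA configuration for Llama 3 can be quite short. The sketch below uses key names from Axolotl's documented config schema, but the dataset, output directory, and hyperparameter values are illustrative placeholders, not the settings used in this article:

```yaml
# Illustrative Axolotl config for QLoRA fine-tuning of Llama 3 (values are placeholders)
base_model: meta-llama/Meta-Llama-3-8B
load_in_4bit: true          # quantize the frozen base model to 4-bit (the "Q" in QLoRA)
adapter: qlora

# LoRA adapter hyperparameters
lora_r: 16
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true    # attach adapters to all linear layers

# Dataset (placeholder path and format)
datasets:
  - path: timdettmers/openassistant-guanaco
    type: completion
val_set_size: 0.05

# Training hyperparameters (illustrative values)
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 8
num_epochs: 1
learning_rate: 0.0001
optimizer: paged_adamw_8bit
lr_scheduler: cosine
bf16: true
gradient_checkpointing: true

output_dir: ./qlora-llama3-out
```

Swapping QLoRA for plain LoRA is mostly a matter of setting `load_in_4bit: false` and `adapter: lora`; the rest of the file stays the same.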

In this article, I show how to use Axolotl for QLoRA/LoRA fine-tuning, using Llama 3 as an example. I focus on explaining the hyperparameters and interpreting the logs produced during an Axolotl fine-tuning run.

The following notebook shows how to fine-tune Llama 3 on your computer using Axolotl and QLoRA:

Get the notebook (#81)
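Assuming Axolotl and its dependencies are already installed, a run is typically launched by pointing the trainer at the config file. The module paths below match Axolotl's CLI as documented around mid-2024 (newer releases also ship an `axolotl train` entry point), and the config filename is a placeholder:

```shell
# Launch QLoRA fine-tuning with a config file (qlora-llama3.yml is a placeholder name)
accelerate launch -m axolotl.cli.train qlora-llama3.yml

# Optionally, merge the trained LoRA adapter back into the base model afterwards
accelerate launch -m axolotl.cli.merge_lora qlora-llama3.yml
```

Check the CLI of the Axolotl version you have installed, as entry points have changed across releases.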
