Finding good training hyperparameters for new LLMs is always difficult and time-consuming. With Zephyr Gemma 7B, Hugging Face seems to have found a good recipe for fine-tuning Gemma. They used a combination of distilled supervised fine-tuning and DPO, similar to the approach they used for the original Zephyr based on Mistral 7B:
We also now know that there are several bugs in the PyTorch version of Gemma initially released on the Hugging Face Hub. These bugs affect the precision and performance of the model during training. They are currently being corrected.
Unsloth, a framework for fast and memory-efficient fine-tuning, has already implemented several patches that improve Gemma's stability during fine-tuning.
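To give an idea of what this looks like in practice, here is a minimal sketch of loading Gemma 7B through Unsloth, which applies these patches automatically. The pre-quantized checkpoint name and the LoRA hyperparameters below are illustrative assumptions, not necessarily the exact configuration used later in this article.

```python
# Minimal sketch: load Gemma 7B with Unsloth so its stability patches apply.
# The checkpoint name and LoRA settings are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-7b-bnb-4bit",  # pre-quantized 4-bit checkpoint (assumed)
    max_seq_length=2048,
    dtype=None,          # let Unsloth pick float16/bfloat16 for the GPU
    load_in_4bit=True,   # QLoRA-style 4-bit loading to reduce VRAM usage
)

# Attach LoRA adapters for parameter-efficient fine-tuning.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0.0,
    bias="none",
)
```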
In this article, I first review the recipe used by Hugging Face to train Zephyr Gemma 7B. Then, I show how to use this recipe with Unsloth. We will see how fast and memory-efficient Unsloth is with Gemma, with a peak memory consumption of 19 GB of VRAM and a total training time of only 8 hours.
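As a preview of the DPO stage, the recipe can be run with TRL's DPOTrainer on top of the Unsloth model loaded above. The sketch below is only indicative: the preference dataset, beta, and training arguments are assumptions for illustration, not Hugging Face's exact settings, and the raw dataset may need mapping into plain-text "prompt"/"chosen"/"rejected" columns first.

```python
# Hedged sketch of the DPO stage with TRL, reusing the Unsloth model above.
# Dataset choice, beta, and training arguments are illustrative assumptions.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import DPOTrainer

# Preference pairs; substitute the dataset used by the actual recipe.
dpo_dataset = load_dataset("argilla/dpo-mix-7k", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,            # with LoRA adapters, TRL uses the frozen base as reference
    beta=0.1,                  # strength of the KL penalty toward the reference model
    train_dataset=dpo_dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=5e-7,
        num_train_epochs=1,
        bf16=True,
        output_dir="zephyr-gemma-dpo",
    ),
)
trainer.train()
```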
The notebook implementing the fine-tuning and DPO training of Gemma 7B with Unsloth is available here: