

Unsloth's Quantization-Aware Training (QAT) vs Post-Training Quantization (PTQ) for Small Models
Can a tiny LLM stay accurate under quantization thanks to QAT?

The Kaitchup – AI on a Budget
Weekly tutorials and news on adapting large language models (LLMs) to your tasks and hardware using the most recent techniques and models. The Kaitchup offers a collection of 170+ AI notebooks that are regularly updated.
