The Kaitchup – AI on a Budget
Archive
Unsloth's Quantization-Aware Training (QAT) vs Post-Training Quantization (PTQ) for Small Models
Can a tiny LLM stay accurate under quantization thanks to QAT?
Nov 10 • Benjamin Marie
BF16 vs FP16 for Reinforcement Learning: Where Are We?
The Weekly Kaitchup #117
Nov 7 • Benjamin Marie
Advanced LoRA Fine-Tuning: How to Pick LoRA, QLoRA, DoRA, PiSSA, OLoRA, EVA, and LoftQ for LLMs
A practical guide to parameter-efficient LLM adaptation on 16-bit and 4-bit models
Nov 3 • Benjamin Marie
October 2025
MiniMax M2 and Kimi-Linear: Why Full Attention Still Wins
The Weekly Kaitchup #116
Oct 31 • Benjamin Marie
Generate Better Synthetic Datasets with a "User" LLM
User LLM + Qwen3 to generate fully synthetic dialogues
Oct 27 • Benjamin Marie
The Weekly Kaitchup #115
Hi Everyone,
Oct 24 • Benjamin Marie
Qwen3-VL Fine-Tuning on Your Computer
Model review, GPU requirements, and code explained step by step
Oct 20 • Benjamin Marie
DGX Spark: Use It for Fine-Tuning
The Weekly Kaitchup #114
Oct 17 • Benjamin Marie
Choosing a GGUF Model: K-Quants, I-Quants, and Legacy Formats
Reviewing the differences between the quant formats and their impact on accuracy, throughput, and memory.
Oct 13 • Benjamin Marie
Tiny Recursive Models for Very Specific Problems
The Weekly Kaitchup #113
Oct 11 • Benjamin Marie
Why Increasing Batch Size Doesn’t Always Speed Up Training
The 5 most common issues that decrease batch training efficiency
Oct 7 • Benjamin Marie
LoRA Is Back
The Weekly Kaitchup #112
Oct 3 • Benjamin Marie