The Kaitchup – AI on a Budget
Archive
RTX 6000 Pro vs H100 & A100: Best Single-GPU Choice for Fast, Low-Cost LLM Fine-Tuning
Faster, cheaper single-GPU training
Jun 16 • Benjamin Marie
The Weekly Kaitchup #96
Magistral - Text-to-LoRA
Jun 13 • Benjamin Marie
Fine-Tuning 2-Bit Qwen3 Models on Your Computer
Code and best practices
Jun 9 • Benjamin Marie
The Weekly Kaitchup #95
Qwen3 Embeddings/Reranker - Packing Improved - Unsloth's Notebooks - SGLang vs. vLLM
Jun 6 • Benjamin Marie
Qwulu 3: Fine-Tuning Qwen3 Base with LoRA and TULU 3's Supervised Fine-Tuning Recipe
Can a supervised fine-tuning recipe that works effectively on Llama 3.1 be applied directly to Qwen3?
Jun 5 • Benjamin Marie
Running DeepSeek-R1-0528 with a Single 24 GB GPU
Is it worth it?
Jun 2 • Benjamin Marie
May 2025
The Weekly Kaitchup #94
RL with Spurious Rewards? - Quantization and Long Contexts
May 30 • Benjamin Marie
Padding-Free vs. Packing: Fast and Efficient Fine-Tuning for LLMs Explained
Padding-free is faster and avoids cross-contamination
May 28 • Benjamin Marie
Qwen3-30B-A3B vs Qwen3-32B: Is the MoE Model Really Worth It?
Qwen3 MoE is a good choice, but don't quantize it
May 26 • Benjamin Marie
The Weekly Kaitchup #93
Gemma 3n - Devstral - Nemotron Nano 4B
May 23 • Benjamin Marie
Qwen3: When <|im_end|> Suddenly Becomes <|endoftext|>
How to waste a Wednesday
May 22 • Benjamin Marie
Boost 2-Bit LLM Accuracy with EoRA
A training-free solution for extreme LLM compression
May 19 • Benjamin Marie