AI Notebooks

Learn everything you need to know about LLMs. These notebooks cover the most advanced techniques for:

  • Supervised fine-tuning (TRL, Unsloth, …)

  • Quantization (GGUF, AWQ, GPTQ, AutoRound, Bitsandbytes, …)

  • Efficient inference and serving (vLLM, Transformers, llama.cpp, Ollama, …)

  • Reinforcement learning and preference optimization (GRPO, PPO, DPO, ORPO, …)

  • Dataset generation

  • Retrieval-augmented generation (RAG)

All applied to state-of-the-art LLMs: Llama 2/3/3.1/3.2, Gemma 2/3/3n, Mistral, Mixtral, Phi, Yi, Falcon, Qwen-VL, Qwen 2/2.5/3, Minitron, DeepSeek models, and others.

There are over 170 notebooks, with two new ones added every week.

They run on Google Colab, and you can copy, share, or download them as Jupyter notebooks.

If a notebook breaks, DM me on Substack and I’ll help.

The notebooks:
