Hugging Face University: PEFT and LoRA — Fine-Tune Without a GPU Farm

PEFT (Parameter-Efficient Fine-Tuning) methods such as LoRA (Low-Rank Adaptation) make it possible to fine-tune large models on consumer hardware by training only a small fraction of the parameters instead of all of them.

How LoRA Works

Instead of updating all model weights, LoRA freezes the pretrained weights and injects a pair of small low-rank matrices into selected layers (typically the attention projections). Only these tiny matrices are trained, which can reduce trainable parameters by more than 99%.
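To make this concrete, here is a minimal NumPy sketch of a LoRA-style update (illustrative only, not PEFT's internals): the frozen weight W is left untouched, while two small matrices A (r × d_in) and B (d_out × r) supply a trainable correction scaled by alpha/r, as in the common formulation ΔW = (alpha/r)·BA.

```python
import numpy as np

d_in, d_out, r, alpha = 4096, 4096, 8, 32

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init so the delta starts at 0

x = rng.normal(size=(d_in,))

# LoRA forward pass: base output plus scaled low-rank correction
y = W @ x + (alpha / r) * (B @ (A @ x))

# With B zero-initialized, the adapted model starts out identical to the base
assert np.allclose(y, W @ x)

# Trainable parameters per adapted layer vs. full fine-tuning
lora_params = r * (d_in + d_out)   # 65,536
full_params = d_in * d_out         # 16,777,216
```

The zero initialization of B is why training starts from the pretrained model's behavior exactly; gradients then gradually build up the low-rank delta.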

Quick LoRA Setup

from peft import LoraConfig, TaskType, get_peft_model

# model: a pretrained causal LM loaded beforehand,
# e.g. with transformers.AutoModelForCausalLM.from_pretrained(...)
config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=32,                        # scaling factor (applied as alpha / r)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.1,
    task_type=TaskType.CAUSAL_LM,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
# trainable params: 4,194,304 || all params: 6,742,609,920 || trainable%: 0.06%
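The trainable count reported above can be reproduced by hand. Each targeted module gets a pair of matrices A (r × d) and B (d × r), i.e. 2·r·d parameters. Assuming a 7B-class decoder-only model with 32 layers and hidden size 4096, where q_proj and v_proj each map hidden → hidden (these dimensions are assumptions about the base model, not stated by the snippet):

```python
# Hypothetical dimensions for a 7B-class decoder-only model
n_layers = 32
hidden = 4096            # q_proj and v_proj map hidden -> hidden
r = 8
modules_per_layer = 2    # q_proj and v_proj

# A is (r x hidden), B is (hidden x r) for each adapted module
params_per_module = r * hidden + hidden * r   # 65,536
trainable = n_layers * modules_per_layer * params_per_module
print(trainable)  # 4194304, matching print_trainable_parameters()
```

Against roughly 6.7B total parameters, that works out to the ~0.06% trainable fraction shown in the output.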


Reference:

PEFT documentation: https://huggingface.co/docs/peft
