Definition
LoRA (Low-Rank Adaptation) is a parameter-efficient fine-tuning method that adds small trainable low-rank matrices to a frozen pre-trained model.
**How It Works:**
- Freeze the original model weights
- Add small "adapter" matrices at specific layers
- Train only the adapters (far fewer parameters); see the sketch after this list
- Merge the adapters back into the base model for inference
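
To make this concrete, here is a minimal PyTorch sketch of a LoRA-wrapped linear layer. It is an illustration under common conventions, not any library's implementation: the class name `LoRALinear`, the default rank `r=8`, and the `alpha / r` scaling are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = W x + (B A) x * scale."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():                      # freeze the pre-trained weights
            p.requires_grad = False
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)    # small random init
        self.B = nn.Parameter(torch.zeros(d_out, r))          # zero init: adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

    @torch.no_grad()
    def merge(self):
        """Fold the adapter into the base weight so inference needs no extra layers."""
        self.base.weight += (self.B @ self.A) * self.scale
```

Wrapping, for example, the attention query and value projections with a class like this and training only `A` and `B` captures the core idea.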
**Benefits:**
- Efficient: up to 10,000x fewer trainable parameters (see the arithmetic after this list)
- Fast: Hours instead of days to fine-tune
- Cheap: Can run on consumer hardware
- Modular: Swap different LoRAs easily
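
The savings are easy to see with a back-of-the-envelope count; the layer size and rank below are hypothetical:

```python
# One hypothetical 4096x4096 projection matrix, adapted at rank r = 8.
d_out, d_in, r = 4096, 4096, 8

full_ft = d_out * d_in        # 16,777,216 params trained under full fine-tuning
lora    = r * (d_out + d_in)  # 65,536 params trained with a rank-8 adapter

print(full_ft // lora)        # 256 -> roughly 256x fewer trainable params for this layer
```

The headline 10,000x figure comes from the original LoRA paper's GPT-3 175B setting, where adapters are attached to only a small subset of weight matrices, so the model-wide ratio is far larger than the per-layer one.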
**Common Uses:**
- Custom Stable Diffusion styles
- Domain-specific LLM adaptation
- Character/concept training
- Language adaptation
**Related Techniques:**
- QLoRA: quantized LoRA for even lower memory use (sketched below)
- DoRA: weight-decomposed LoRA
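
A rough sketch of how QLoRA-style training is commonly wired up with the Hugging Face `transformers`, `bitsandbytes`, and `peft` libraries (4-bit base model plus LoRA adapters); the model id, rank, and target modules are placeholder choices, not recommendations:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the frozen base model in 4-bit NF4 (the QLoRA recipe); model id is an example.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters to the attention query/value projections only.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```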
Examples
Training a LoRA on anime images to generate anime-style art with Stable Diffusion.
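
On the inference side, a trained style LoRA can be dropped into a `diffusers` pipeline; in this sketch the checkpoint path and prompt are placeholders, and the training run itself is assumed to have already happened:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the anime-style LoRA weights (placeholder path) on top of the frozen base model.
pipe.load_lora_weights("path/to/anime_style_lora")

image = pipe("a portrait of a cat, anime style").images[0]
image.save("anime_cat.png")
```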