Gemma 3-1B — Quick Reference
Project Dependencies
To make fine-tuning efficient on consumer hardware, this project relies on a specific set of optimized libraries, chosen to balance performance (speed and memory) with ease of use (Hugging Face integration). How the pieces fit together is sketched after the list below.
Key Libraries
- unsloth: Accelerates training and reduces VRAM usage
- torch (PyTorch): Core deep learning framework
- transformers: Hugging Face model loading/manipulation
- peft: Enables LoRA parameter-efficient fine-tuning
- trl: SFTTrainer for supervised fine-tuning
- bitsandbytes: 4/8-bit quantization
- accelerate: Device placement, mixed-precision, and distributed execution
- datasets: Data loading and manipulation
- tensorboard, matplotlib: Visualization and reporting
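How these libraries fit together in practice: the minimal sketch below loads Gemma 3 1B in 4-bit through Unsloth, attaches LoRA adapters via PEFT, and hands training off to TRL's SFTTrainer. The checkpoint name `unsloth/gemma-3-1b-it`, the `train.jsonl` file, and every hyperparameter shown are illustrative assumptions, not this project's pinned configuration, and SFTTrainer argument names shift between trl versions.

```python
# Minimal QLoRA-style sketch: 4-bit load + LoRA adapters + supervised fine-tuning.
# Checkpoint, dataset file, and hyperparameters are illustrative assumptions.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load Gemma 3 1B quantized to 4-bit (bitsandbytes, handled by Unsloth).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-1b-it",  # assumed checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (PEFT) so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Assumed: a JSONL file whose records have a "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="outputs",
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,
        logging_dir="runs",       # TensorBoard log directory
        report_to="tensorboard",
    ),
)
trainer.train()
```

Note that unsloth wraps the bitsandbytes 4-bit load and the PEFT adapter setup behind a single API, which is why neither library is imported directly here even though both must be installed.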
Installation
Recommended: use a virtual environment, then install dependencies via pip:
```
pip install -r requirements.txt
```

Contents of `requirements.txt`:

```
torch
unsloth
trl
peft
accelerate
bitsandbytes
datasets
transformers
tensorboard
matplotlib
```
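After installing, a short sanity check confirms that the core libraries import and that a CUDA GPU is visible. This is a minimal sketch; the filename `check_env.py` is just a suggestion:

```python
# check_env.py - verify the installed libraries import and report versions.
import torch
import transformers
import peft
import trl
import datasets

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("peft:", peft.__version__)
print("trl:", trl.__version__)
print("datasets:", datasets.__version__)

# Unsloth and bitsandbytes expect a CUDA device; check one is visible.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```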
Last Updated: 2026-04-29
Version: 1.0
Status: Complete