QAI Training Dashboard
Connected
Refresh
Auto (5s)
Dark
Export
Overview
Jobs
Models
Datasets
Configs
New Training
Use Ctrl/Cmd+K to focus. Press Enter to open the best tab match.
Clear
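A hedged sketch of how the command bar's "best tab match" behavior could work. The tab names come from the dashboard's own tab list; the matching strategy (stdlib difflib fuzzy matching) is an assumption, not the dashboard's actual implementation.

```python
# Sketch: resolve a typed query to the closest dashboard tab.
# Matching via difflib is an assumption; the real dashboard's
# matcher is unknown.
import difflib

TABS = ["Overview", "Jobs", "Models", "Datasets", "Configs", "New Training"]

def best_tab_match(query: str):
    """Return the closest tab name for a typed query, or None."""
    matches = difflib.get_close_matches(
        query.lower(), [t.lower() for t in TABS], n=1, cutoff=0.3
    )
    if not matches:
        return None
    return next(t for t in TABS if t.lower() == matches[0])
```

Even a misspelled query like "mdels" resolves, which matches the hint's promise that Enter opens the best match rather than requiring an exact name.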
GPU Status
System Resources
Completed: -
Running: -
Queued: -
Failed: -
Avg Time: -
Best Score: -
Perplexity Progress
Training Duration
Running: 0 (no jobs running)
Completed: 0 (no completed jobs)
Top Models
All Training Jobs
Model Performance Leaderboard
Available Datasets
Training Configurations
Start New Training Job
Job Name *
Use lowercase, underscores, no spaces
Model *
Phi-3.5-mini-instruct (3.8B params)
Qwen2.5-3B-Instruct (3B params)
Fast training, good for chat tasks
Dataset *
Select a dataset
Epochs
More epochs improve fit but increase training time and overfitting risk
Max Train Samples
-1 for all samples
Learning Rate
Default: 2e-4
Advanced Options ▼
Batch Size
1 (Slow, less memory)
2 (Balanced)
4 (Fast, more memory)
8 (Very fast, high memory)
Gradient Accumulation
Simulate larger batches
Warmup Steps
LR warmup period
LoRA Rank
Higher rank = more trainable parameters
LoRA Alpha
Usually 2x rank
LoRA Dropout
0.0 to 0.5 typical
Weight Decay
L2 regularization
Max Grad Norm
Gradient clipping
Random Seed
For reproducibility
Enable Evaluation
Max Eval Samples
Eval Steps
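The form's hints state several rules (lowercase job names with underscores and no spaces, LoRA dropout typically 0.0 to 0.5, LoRA alpha usually 2x rank, learning rate defaulting to 2e-4). A hedged sketch of those rules as plain-Python validation; the field names mirror the form labels, but the validation code itself is an assumption, not the dashboard's actual implementation.

```python
# Sketch: the form's stated rules expressed as config validation.
# Defaults mirror the form hints; "lora_rank" default 16 is hypothetical.
import re

DEFAULTS = {
    "learning_rate": 2e-4,    # "Default: 2e-4"
    "max_train_samples": -1,  # "-1 for all samples"
    "lora_rank": 16,          # hypothetical default
    "lora_alpha": 32,         # "Usually 2x rank"
    "lora_dropout": 0.1,      # "0.0 to 0.5 typical"
    "seed": 42,               # "For reproducibility"
}

def validate(cfg: dict) -> list:
    """Return a list of problems for a training-job config."""
    problems = []
    # Job name: "Use lowercase, underscores, no spaces"
    if not re.fullmatch(r"[a-z0-9_]+", cfg.get("job_name", "")):
        problems.append("job_name must be lowercase with underscores, no spaces")
    # LoRA dropout: "0.0 to 0.5 typical"
    if not 0.0 <= cfg.get("lora_dropout", 0.1) <= 0.5:
        problems.append("lora_dropout outside the typical 0.0-0.5 range")
    # LoRA alpha: "Usually 2x rank" (advisory, not fatal)
    if cfg.get("lora_alpha", 32) != 2 * cfg.get("lora_rank", 16):
        problems.append("lora_alpha is usually 2x lora_rank")
    return problems
```

A config such as `{"job_name": "my_run", "lora_rank": 8, "lora_alpha": 16}` passes cleanly, while an uppercase name or mismatched alpha produces advisory messages.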
Estimated Time: ~5 minutes
Est. VRAM: ~4GB
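A hedged sketch of how an "Estimated Time" figure like the one above could be derived from the form values. The step formula (epochs times samples over the effective batch size) is standard for gradient-accumulation training; the seconds-per-step calibration constant is a made-up assumption, and the dashboard's actual estimator is unknown.

```python
# Sketch: optimizer-step count and rough wall-clock estimate.
# seconds_per_step is a hypothetical calibration constant.
def estimate_steps(epochs: int, samples: int, batch_size: int,
                   grad_accum: int) -> int:
    """Optimizer steps = epochs * ceil(samples / effective batch)."""
    effective_batch = batch_size * grad_accum
    return epochs * -(-samples // effective_batch)  # ceiling division

def estimate_minutes(steps: int, seconds_per_step: float = 3.0) -> float:
    """Convert a step count to minutes under an assumed step time."""
    return steps * seconds_per_step / 60.0
```

For example, 1 epoch over 100 samples with batch size 2 and gradient accumulation 4 gives an effective batch of 8, i.e. 13 optimizer steps.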
Start Training
Save Config
Load Config
Reset
Quick Presets:
Quick Test (1 epoch, 100 samples)
Tuning Wizard
Standard (3 epochs, 1k samples)
Full Training (5 epochs, all samples)
Production (10 epochs, all samples)
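The four presets above can be sketched as data, using the epoch and sample counts shown on the buttons (with -1 standing for "all samples", per the Max Train Samples hint). The preset keys and the overlay helper are illustrative assumptions, not the dashboard's actual code.

```python
# Sketch: the quick presets as a lookup table.
# Values come from the preset button labels; -1 = "all samples".
PRESETS = {
    "quick_test": {"epochs": 1, "max_train_samples": 100},
    "standard":   {"epochs": 3, "max_train_samples": 1000},
    "full":       {"epochs": 5, "max_train_samples": -1},
    "production": {"epochs": 10, "max_train_samples": -1},
}

def apply_preset(name: str, config: dict) -> dict:
    """Overlay a preset onto an existing config without mutating it."""
    return {**config, **PRESETS[name]}
```

Keeping presets as data rather than branching logic makes adding a new preset a one-line change.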
Keyboard Shortcuts
Toggle Auto-Refresh: A
Toggle Dark Mode: D
Refresh Data: R
Export Report: E
Overview Tab: 1
Jobs Tab: 2
Models Tab: 3
Search Jobs: /
Command Bar: Ctrl/Cmd+K
Close Modal: ESC
Close