Training Configuration
Training Status
No training in progress
Training Sessions
Loading checkpoints...
Training Session: -
Iteration: -
Train Loss: -
Val Loss: -
Perplexity: -
Tokens/sec: -
Trained Tokens: -
Epoch: -
Memory (GB): -
Learning Rate: -
Warmup Steps: -
LR Decay: -
Weight Decay: -
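Several of the fields above are derived rather than logged directly. A minimal sketch of those relationships in Python, assuming hypothetical field names (train_loss, trained_tokens, tokens_per_epoch, and so on) rather than the dashboard's actual schema, and a linear-warmup-plus-step-decay learning-rate schedule that a given session may or may not use:

```python
import math

def derived_metrics(train_loss: float, val_loss: float,
                    trained_tokens: int, tokens_per_epoch: int,
                    tokens_processed: int, elapsed_seconds: float) -> dict:
    """Recompute the panel's derived fields from raw log values."""
    return {
        # Perplexity is the exponential of the validation cross-entropy loss.
        "perplexity": math.exp(val_loss),
        # Throughput over the measured window.
        "tokens_per_sec": tokens_processed / elapsed_seconds,
        # Fractional epoch: passes over the training set so far.
        "epoch": trained_tokens / tokens_per_epoch,
        # Gap between validation and training loss (used by the analysis view).
        "generalization_gap": val_loss - train_loss,
    }

def learning_rate(step: int, base_lr: float, warmup_steps: int,
                  decay_rate: float, decay_every: int) -> float:
    """Linear warmup followed by step decay -- one common convention;
    the schedule a real session uses may differ."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr * decay_rate ** ((step - warmup_steps) // decay_every)

print(derived_metrics(train_loss=1.82, val_loss=1.95,
                      trained_tokens=1_200_000, tokens_per_epoch=800_000,
                      tokens_processed=4_096, elapsed_seconds=8.5))
print(learning_rate(step=250, base_lr=1e-5, warmup_steps=100,
                    decay_rate=0.9, decay_every=200))
```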
Training Progress
Elapsed Time: --
Remaining Time: --
Progress: 0%
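The elapsed/remaining time readout can be estimated from iteration progress alone. A minimal sketch, assuming roughly constant time per iteration (the app may use a different estimator):

```python
def progress_and_eta(current_iter: int, total_iters: int,
                     elapsed_seconds: float) -> tuple[float, float]:
    """Return (progress percent, estimated remaining seconds)."""
    if current_iter <= 0:
        return 0.0, float("inf")
    progress = current_iter / total_iters
    # Naive ETA: assume future iterations take as long as past ones on average.
    remaining = elapsed_seconds * (1 - progress) / progress
    return 100 * progress, remaining

print(progress_and_eta(current_iter=150, total_iters=600, elapsed_seconds=900))
# -> (25.0, 2700.0): 25% done, about 45 minutes remaining.
```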
Top 3 Best Checkpoints
No training data available
All Available Checkpoints
This list shows all available checkpoints for the selected training session.
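One way the "Top 3 Best Checkpoints" panel could rank entries is by validation loss. The sketch below uses that criterion and toy file names, both of which are assumptions rather than the app's actual behavior:

```python
def top_checkpoints(checkpoints: list[dict], k: int = 3) -> list[dict]:
    """Rank checkpoints by validation loss (lowest first) and keep the top k.

    Each checkpoint dict is assumed to carry 'path' and 'val_loss' keys;
    the real session data may use different keys or a different criterion.
    """
    return sorted(checkpoints, key=lambda c: c["val_loss"])[:k]

ckpts = [{"path": "checkpoint_100", "val_loss": 2.10},
         {"path": "checkpoint_200", "val_loss": 1.94},
         {"path": "checkpoint_300", "val_loss": 1.97},
         {"path": "checkpoint_400", "val_loss": 2.05}]
print([c["path"] for c in top_checkpoints(ckpts)])
# -> ['checkpoint_200', 'checkpoint_300', 'checkpoint_400']
```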
Training Sessions
Important: only compare sessions trained on the same dataset.
Loading training sessions...
Training Analysis Dashboard
Training Session Analysis
Select one or more training sessions from the left panel to view their performance metrics and compare results.
Available Analysis:
- Loss Curves
- Perplexity Evolution
- Loss Stability
- Generalization Gap
Tips:
- View single session metrics
- Compare multiple sessions
- Use same training dataset
- Review hyperparameters
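As a rough reference, the analysis views listed above (loss curves, perplexity evolution, loss stability, generalization gap) can all be derived from per-iteration train/validation losses. A minimal sketch, assuming each session's log is a list of (iteration, train_loss, val_loss) tuples, which is an assumed format rather than the dashboard's actual storage:

```python
import math
import statistics

def analyze_session(log: list[tuple[int, float, float]], window: int = 10) -> dict:
    """log: (iteration, train_loss, val_loss) tuples for one session."""
    iters = [it for it, _, _ in log]
    train = [tl for _, tl, _ in log]
    val = [vl for _, _, vl in log]
    return {
        # Loss curves: the raw series, ready for plotting.
        "loss_curves": {"iteration": iters, "train": train, "val": val},
        # Perplexity evolution: exp of the validation loss at each point.
        "perplexity": [math.exp(v) for v in val],
        # Loss stability: rolling standard deviation of the train loss.
        "stability": [statistics.stdev(train[max(0, i - window):i + 1])
                      for i in range(1, len(train))],
        # Generalization gap: validation loss minus train loss at each point.
        "generalization_gap": [v - t for t, v in zip(train, val)],
    }
```

As the note above stresses, comparing these series across sessions is only meaningful when the sessions were trained and validated on the same dataset.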
Model Fusion
Fusion Progress
Ready for Fusion
Select a trained adapter, then click "Start Fusion" to begin. The base model will be automatically detected.
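Conceptually, fusion folds the trained adapter weights back into the base model so the fused model needs no adapter at inference time. A minimal NumPy sketch of that merge, assuming LoRA-style low-rank adapters and the usual alpha/rank scaling convention (the app's actual fusion code may differ):

```python
import numpy as np

def fuse_lora_layer(w_base: np.ndarray, lora_a: np.ndarray,
                    lora_b: np.ndarray, alpha: float, rank: int) -> np.ndarray:
    """Merge one low-rank adapter into its base weight matrix.

    w_base: (out_dim, in_dim) base weight
    lora_a: (rank, in_dim), lora_b: (out_dim, rank) low-rank factors
    alpha / rank: assumed scaling convention for the adapter update.
    """
    return w_base + (alpha / rank) * (lora_b @ lora_a)

# Toy usage: after fusion, the adapter matrices can be discarded.
w = np.random.randn(8, 16).astype(np.float32)
a = np.random.randn(4, 16).astype(np.float32)
b = np.random.randn(8, 4).astype(np.float32)
w_fused = fuse_lora_layer(w, a, b, alpha=16.0, rank=4)
print(w_fused.shape)  # (8, 16), same shape as the base weight
```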
Model Quantization
Quantized Models
No quantized models yet
Quantization Progress
Ready for Quantization
Select a model and click "Start Quantization" to begin.
Quantization Tips
4-bit vs 8-bit: 4-bit provides better compression (~75% size reduction) but slightly lower quality; 8-bit offers better quality with ~50% compression.
Group Size: Smaller groups (32) provide better quality but larger files; 64 is the recommended balance.
Performance: Quantized models are faster to load and use less memory, making them ideal for deployment.
Compatibility: Quantized models work with the same inference code as the original models.
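To make the size trade-offs above concrete: in group-wise quantization, each group of weights stores its values at the chosen bit width plus per-group parameters (assumed here to be a 16-bit scale and a 16-bit bias), so smaller groups mean proportionally more overhead. A minimal sketch under those assumptions:

```python
def quantized_size_bytes(n_params: int, bits: int, group_size: int,
                         scale_bits: int = 16, bias_bits: int = 16) -> float:
    """Rough on-disk size of a group-wise quantized weight tensor."""
    weight_bits = n_params * bits
    # One scale and one bias per group of `group_size` weights.
    overhead_bits = (n_params // group_size) * (scale_bits + bias_bits)
    return (weight_bits + overhead_bits) / 8

n = 7_000_000_000  # e.g. a 7B-parameter model
fp16 = n * 2       # 16-bit baseline, 2 bytes per parameter
for bits, group in [(4, 64), (4, 32), (8, 64)]:
    q = quantized_size_bytes(n, bits, group)
    print(f"{bits}-bit, group {group}: {q / 1e9:.1f} GB "
          f"({100 * (1 - q / fp16):.0f}% smaller than fp16)")
```

Under these assumptions, 4-bit at group size 64 comes out roughly 72% smaller than fp16 and 8-bit roughly 47% smaller, in line with the ~75% and ~50% figures above; dropping the group size to 32 adds per-group overhead, which is why smaller groups mean larger files.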
Model Configuration
Generation Output
Model ready! Type a message below to start the conversation.