Fantastic Pretraining Optimizers and Where to Find Them
Paper: arXiv 2509.02046
sophia130m10B

| Hyperparameter | Value |
|---|---|
| beta1 | 0.95 |
| beta2 | 0.99 |
| epsilon | 1e-07 |
| gamma | 0.0125 |
| learning_rate | 0.004 |
| max_grad_norm | 1 |
| min_lr_ratio | 0 |
| train_batch_size | 128 |
| warmup | 4000 |
| weight_decay | 0.2 |
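
As a rough illustration, the table above can be collected into a single configuration object. The dataclass and the warmup-plus-cosine learning-rate schedule below are assumptions for the sketch (the paper's actual training code and Sophia implementation may differ); the field names and values mirror the table.

```python
# Minimal sketch of the sophia130m10B hyperparameters as a config object.
# The dataclass and the schedule function are illustrative assumptions,
# not the paper's training code.
import math
from dataclasses import dataclass

@dataclass
class SophiaConfig:
    beta1: float = 0.95           # EMA coefficient for the first moment
    beta2: float = 0.99           # EMA coefficient for the Hessian estimate
    epsilon: float = 1e-07        # numerical stabilizer in the update
    gamma: float = 0.0125         # Sophia clipping threshold scale
    learning_rate: float = 0.004  # peak learning rate
    max_grad_norm: float = 1.0    # global gradient-norm clipping
    min_lr_ratio: float = 0.0     # final LR as a fraction of the peak LR
    train_batch_size: int = 128   # sequences per optimization step
    warmup: int = 4000            # warmup steps
    weight_decay: float = 0.2     # decoupled weight decay

def lr_at_step(cfg: SophiaConfig, step: int, total_steps: int) -> float:
    """Linear warmup to the peak LR, then cosine decay down to
    min_lr_ratio * peak (an assumed, commonly used schedule)."""
    if step < cfg.warmup:
        return cfg.learning_rate * step / max(1, cfg.warmup)
    progress = (step - cfg.warmup) / max(1, total_steps - cfg.warmup)
    floor = cfg.min_lr_ratio * cfg.learning_rate
    return floor + 0.5 * (cfg.learning_rate - floor) * (1 + math.cos(math.pi * progress))
```

With min_lr_ratio set to 0, this schedule decays the learning rate all the way to zero by the end of training.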