Unit Scaling: Out-of-the-Box Low-Precision Training
Abstract
We present unit scaling, a paradigm for designing deep learning models that simplifies the use of low-precision number formats. Training in FP16 or the recently proposed FP8 formats offers substantial efficiency gains, but can lack sufficient range for out-of-the-box training. Unit scaling addresses this by introducing a principled approach to model numerics: seeking unit variance of all weights, activations and gradients at initialisation. Unlike alternative methods, this approach neither requires multiple training runs to find a suitable scale nor has significant computational overhead. We demonstrate the efficacy of unit scaling across a range of models and optimisers. We further show that existing models can be adapted to be unit-scaled, training BERT-Large in FP16 and then FP8 with no degradation in accuracy.
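To illustrate the core idea described in the abstract, below is a minimal sketch (not the authors' library) of a unit-scaled matrix multiplication in PyTorch. The forward output, the input gradient, and the weight gradient each receive their own scaling factor, chosen so that if the inputs have roughly unit variance at initialisation, all three output tensors do too. The class name `UnitScaledMatMul` and the exact scale choices are illustrative assumptions based on the abstract's description.

```python
import torch
from torch.autograd import Function


class UnitScaledMatMul(Function):
    """Sketch of a unit-scaled matmul: independent scales for the forward output,
    the input gradient and the weight gradient (illustrative, not the paper's code)."""

    @staticmethod
    def forward(ctx, x, w):
        # x: (batch, fan_in), w: (fan_in, fan_out); both assumed ~unit variance at init.
        ctx.save_for_backward(x, w)
        fan_in = x.shape[-1]
        return (x @ w) * fan_in ** -0.5  # output scale: 1/sqrt(fan_in)

    @staticmethod
    def backward(ctx, grad_y):
        x, w = ctx.saved_tensors
        batch, fan_in = x.shape
        fan_out = w.shape[-1]
        grad_x = (grad_y @ w.t()) * fan_out ** -0.5  # input-grad scale: 1/sqrt(fan_out)
        grad_w = (x.t() @ grad_y) * batch ** -0.5    # weight-grad scale: 1/sqrt(batch)
        return grad_x, grad_w


if __name__ == "__main__":
    x = torch.randn(256, 1024, requires_grad=True)  # unit-variance activations
    w = torch.randn(1024, 512, requires_grad=True)  # unit-variance weight init
    y = UnitScaledMatMul.apply(x, w)
    y.backward(torch.randn_like(y))                 # unit-variance incoming gradient
    print(y.std().item(), x.grad.std().item(), w.grad.std().item())  # each ~1.0
```

Because the forward and backward passes use independent scales, the backward pass is no longer the exact gradient of the scaled forward function; this deliberate trade-off keeps every tensor comfortably within the representable range of FP16/FP8 without per-model scale tuning.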
Cite
Text
Blake et al. "Unit Scaling: Out-of-the-Box Low-Precision Training." International Conference on Machine Learning, 2023.
Markdown
[Blake et al. "Unit Scaling: Out-of-the-Box Low-Precision Training." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/blake2023icml-unit/)
BibTeX
@inproceedings{blake2023icml-unit,
title = {{Unit Scaling: Out-of-the-Box Low-Precision Training}},
author = {Blake, Charlie and Orr, Douglas and Luschi, Carlo},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {2548--2576},
volume = {202},
url = {https://mlanthology.org/icml/2023/blake2023icml-unit/}
}