SWALP : Stochastic Weight Averaging in Low Precision Training
Abstract
Low-precision operations can provide scalability, memory savings, portability, and energy efficiency. This paper proposes SWALP, an approach to low-precision training that averages low-precision SGD iterates with a modified learning rate schedule. SWALP is easy to implement and can match the performance of full-precision SGD even with all numbers quantized down to 8 bits, including the gradient accumulators. Additionally, we show that SWALP converges arbitrarily close to the optimal solution for quadratic objectives, and to a noise ball asymptotically smaller than that of low-precision SGD in strongly convex settings.
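The abstract compresses the whole recipe into one sentence: run SGD with every stored number quantized to low precision, then average the iterates. Below is a minimal sketch of that idea on a toy quadratic objective, assuming a fixed-point quantizer with stochastic rounding; the function names (`quantize_stochastic`, `swalp_quadratic`) and all hyperparameters are illustrative, not the authors' reference implementation.

```python
import numpy as np

np.random.seed(0)

def quantize_stochastic(x, word_len=8, frac_len=6):
    """Quantize to a fixed-point grid with stochastic rounding."""
    delta = 2.0 ** (-frac_len)                 # grid spacing
    bound = 2.0 ** (word_len - frac_len - 1)   # representable range
    scaled = np.asarray(x) / delta
    floor = np.floor(scaled)
    # Round up with probability equal to the fractional remainder, so the
    # quantizer is unbiased inside the representable range: E[Q(x)] = x.
    rounded = floor + (np.random.rand(*scaled.shape) < (scaled - floor))
    return np.clip(rounded * delta, -bound, bound - delta)

def swalp_quadratic(A, b, w0, lr=0.1, steps=2000, warmup=1000, cycle=10):
    """Low-precision SGD on f(w) = 0.5 w^T A w - b^T w, averaging one
    quantized iterate per cycle after a warmup period (illustrative schedule)."""
    w = quantize_stochastic(w0)
    w_avg, n_avg = np.zeros_like(w, dtype=float), 0
    for t in range(steps):
        grad = A @ w - b + 0.1 * np.random.randn(*w.shape)  # noisy gradient
        # The updated weights (the gradient accumulator) are re-quantized,
        # so every stored number stays in low precision during training.
        w = quantize_stochastic(w - lr * grad)
        if t >= warmup and (t - warmup) % cycle == 0:
            n_avg += 1
            # Running average kept in full precision here (an assumption of
            # this sketch, not a claim about the paper's storage format).
            w_avg += (w - w_avg) / n_avg
    return w_avg

A = np.diag([1.0, 2.0])
b = np.array([0.5, -0.25])
print("optimum:      ", np.linalg.solve(A, b))
print("SWALP average:", swalp_quadratic(A, b, w0=np.zeros(2)))
```

Stochastic rounding matters here: because the quantizer is unbiased, the quantization error behaves like zero-mean noise that averaging can suppress, which is the intuition behind the convergence claims in the abstract.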
Cite
Text
Yang et al. "SWALP : Stochastic Weight Averaging in Low Precision Training." International Conference on Machine Learning, 2019.
Markdown
[Yang et al. "SWALP : Stochastic Weight Averaging in Low Precision Training." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/yang2019icml-swalp/)
BibTeX
@inproceedings{yang2019icml-swalp,
  title     = {{SWALP : Stochastic Weight Averaging in Low Precision Training}},
  author    = {Yang, Guandao and Zhang, Tianyi and Kirichenko, Polina and Bai, Junwen and Wilson, Andrew Gordon and De Sa, Chris},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {7015--7024},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/yang2019icml-swalp/}
}