Cambier et al. "Shifted and Squeezed 8-Bit Floating Point Format for Low-Precision Training of Deep Neural Networks." International Conference on Learning Representations, 2020.
Markdown
[Cambier et al. "Shifted and Squeezed 8-Bit Floating Point Format for Low-Precision Training of Deep Neural Networks." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/cambier2020iclr-shifted/)
BibTeX
@inproceedings{cambier2020iclr-shifted,
  title     = {{Shifted and Squeezed 8-Bit Floating Point Format for Low-Precision Training of Deep Neural Networks}},
  author    = {Cambier, Léopold and Bhiwandiwalla, Anahita and Gong, Ting and Nekuii, Mehran and Elibol, Oguz H and Tang, Hanlin},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/cambier2020iclr-shifted/}
}