Toward Efficient Low-Precision Training: Data Format Optimization and Hysteresis Quantization
Abstract
As the complexity and size of deep neural networks continue to grow, low-precision training has been extensively studied in recent years to reduce hardware overhead. Training performance is largely determined by the numeric formats used to represent the different values involved in low-precision training, but finding an optimal format typically requires numerous training runs, which is very time-consuming. In this paper, we propose a method to efficiently find an optimal format for activations and errors without actual training. We employ this method to determine an 8-bit format suitable for training various models. In addition, we propose hysteresis quantization to suppress undesired fluctuation of quantized weights during training. This scheme enables deeply quantized training with 4-bit weights, exhibiting only 0.2% accuracy degradation for ResNet-18 trained on ImageNet.
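The sketch below is a minimal, hypothetical illustration of the hysteresis idea in a uniform quantizer, written in Python/NumPy with a toy grid spacing; it is not the authors' exact formulation. It shows how rounding toward the previously quantized level (floor while the latent weight sits above it, ceil while below) keeps the quantized weight from flipping between adjacent levels when the latent value oscillates near a rounding boundary, whereas nearest rounding flips back and forth.

```python
import numpy as np

def quantize_nearest(w, step):
    """Plain uniform quantization: round to the nearest grid level."""
    return np.round(w / step) * step

def quantize_hysteresis(w, q_prev, step):
    """Hysteresis-style uniform quantization (illustrative sketch, not the paper's exact rule).

    Instead of rounding to the nearest level every step, round toward the
    previously quantized value: floor while the latent weight sits above it,
    ceil while it sits below. The quantized value then changes only after the
    latent weight has drifted a full step away, suppressing back-and-forth
    flipping around a rounding boundary.
    """
    return np.where(w >= q_prev, np.floor(w / step), np.ceil(w / step)) * step

if __name__ == "__main__":
    step = 2.0 ** -2                                   # toy grid spacing (0.25)
    latent = 0.37 + 0.02 * np.sin(np.arange(20))       # latent weight oscillating near the 0.375 boundary
    q_near, q_hyst = [], []
    q_prev = quantize_nearest(latent[0], step)
    for w in latent:
        q_near.append(quantize_nearest(w, step))
        q_prev = quantize_hysteresis(w, q_prev, step)
        q_hyst.append(q_prev)
    print("nearest   :", sorted(set(q_near)))          # flips between 0.25 and 0.5
    print("hysteresis:", sorted(set(q_hyst)))          # stays on a single level
```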
Cite
Text
Lee et al. "Toward Efficient Low-Precision Training: Data Format Optimization and Hysteresis Quantization." International Conference on Learning Representations, 2022.
Markdown
[Lee et al. "Toward Efficient Low-Precision Training: Data Format Optimization and Hysteresis Quantization." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/lee2022iclr-efficient/)
BibTeX
@inproceedings{lee2022iclr-efficient,
title = {{Toward Efficient Low-Precision Training: Data Format Optimization and Hysteresis Quantization}},
author = {Lee, Sunwoo and Park, Jeongwoo and Jeon, Dongsuk},
booktitle = {International Conference on Learning Representations},
year = {2022},
url = {https://mlanthology.org/iclr/2022/lee2022iclr-efficient/}
}