Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators

Abstract

The majority of research on the quantization of Deep Neural Networks (DNNs) focuses on reducing the precision of tensors visible to high-level frameworks (e.g., weights, activations, and gradients). However, current hardware still relies on high-precision core operations, most significantly the accumulation of products. This high-precision accumulation is gradually becoming the main computational bottleneck, because, until now, using low-precision accumulators led to a significant degradation in accuracy. In this work, we present a simple method to train and fine-tune DNNs that allows, for the first time, the use of cheaper 12-bit accumulators with no significant degradation in accuracy. Lastly, we show that as the accumulation precision is decreased further, fine-grained gradient approximations can improve DNN accuracy.
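To make the bottleneck concrete, below is a minimal sketch (not the authors' method) of what a low bit-width accumulator does during a dot product: each partial sum is clamped to a signed 12-bit range, so it can saturate even when the inputs are low precision. The bit widths, the saturation policy, and the toy 4-bit input range are illustrative assumptions.

import numpy as np

ACC_BITS = 12
# Signed 12-bit accumulator range: [-2048, 2047].
ACC_MIN, ACC_MAX = -(2 ** (ACC_BITS - 1)), 2 ** (ACC_BITS - 1) - 1

def dot_low_bit_acc(w, x):
    """Accumulate integer products one by one, saturating to 12 bits."""
    acc = 0
    for wi, xi in zip(w, x):
        acc = int(np.clip(acc + int(wi) * int(xi), ACC_MIN, ACC_MAX))
    return acc

rng = np.random.default_rng(0)
w = rng.integers(-8, 8, size=64)   # low-precision weights (toy 4-bit range)
x = rng.integers(-8, 8, size=64)   # low-precision activations
# Compare the saturated accumulation against the exact integer result.
print(dot_low_bit_acc(w, x), int(w @ x))

When the two printed values disagree, the accumulator has saturated; training methods like the one proposed here aim to keep accuracy high despite such clipping.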

Cite

Text

Blumenfeld et al. "Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators." International Conference on Learning Representations, 2024.

Markdown

[Blumenfeld et al. "Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/blumenfeld2024iclr-cheaper/)

BibTeX

@inproceedings{blumenfeld2024iclr-cheaper,
  title     = {{Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators}},
  author    = {Blumenfeld, Yaniv and Hubara, Itay and Soudry, Daniel},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/blumenfeld2024iclr-cheaper/}
}