Power Awareness in Low Precision Neural Networks

Abstract

Existing approaches for reducing DNN power consumption rely on general principles, such as avoiding multiplication operations and aggressively quantizing weights and activations. However, these methods do not account for the precise power consumed by each module in the network and are therefore suboptimal. In this paper, we develop accurate power consumption models for all arithmetic operations in the DNN under various working conditions. We reveal several important factors that have been overlooked to date. Based on our analysis, we present PANN (power-aware neural network), a simple approach for approximating any full-precision network by a low-power fixed-precision variant. Our method can be applied to a pre-trained network and can also be used during training to achieve improved performance. Unlike previous methods, PANN incurs only a minor degradation in accuracy w.r.t. the full-precision version of the network and enables seamless traversal of the power-accuracy trade-off at deployment time.

Cite

Text

Spingarn-Eliezer et al. "Power Awareness in Low Precision Neural Networks." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25082-8_5

Markdown

[Spingarn-Eliezer et al. "Power Awareness in Low Precision Neural Networks." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/spingarneliezer2022eccvw-power/) doi:10.1007/978-3-031-25082-8_5

BibTeX

@inproceedings{spingarneliezer2022eccvw-power,
  title     = {{Power Awareness in Low Precision Neural Networks}},
  author    = {Spingarn-Eliezer, Nurit and Banner, Ron and Ben-Yaacov, Hilla and Hoffer, Elad and Michaeli, Tomer},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2022},
  pages     = {67--83},
  doi       = {10.1007/978-3-031-25082-8_5},
  url       = {https://mlanthology.org/eccvw/2022/spingarneliezer2022eccvw-power/}
}