Energy-Efficient ConvNets Through Approximate Computing
Abstract
Recently, convolutional neural networks (ConvNets) have emerged as state-of-the-art classification and detection algorithms, achieving near-human performance in visual detection. However, ConvNet algorithms are typically very computation- and memory-intensive. In order to embed ConvNet-based classification into wearable platforms and embedded systems such as smartphones or ubiquitous electronics for the internet-of-things, their energy consumption must be reduced drastically. This paper proposes methods based on approximate computing to reduce energy consumption in state-of-the-art ConvNet accelerators. By combining techniques at both the system and circuit level, we can reduce the energy consumed by the system's arithmetic: by up to 30× without losing classification accuracy, and by more than 100× at 99% classification accuracy, compared to the commonly used 16-bit fixed-point number format.
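The abstract's baseline is 16-bit fixed-point arithmetic, with energy savings coming from computing at reduced precision. As a minimal sketch of what reduced-precision fixed-point quantization means (the function name and parameters are illustrative, not taken from the paper), values can be rounded onto a signed fixed-point grid whose resolution depends on the bit width:

```python
import numpy as np

def quantize_fixed_point(x, total_bits, frac_bits):
    """Round x onto a signed fixed-point grid with `total_bits` bits,
    `frac_bits` of which are fractional; values are clipped to the
    representable range. Illustrative helper, not the paper's code."""
    scale = 2.0 ** frac_bits
    lo = -(2 ** (total_bits - 1)) / scale          # most negative code
    hi = (2 ** (total_bits - 1) - 1) / scale       # most positive code
    return np.clip(np.round(x * scale) / scale, lo, hi)

x = np.array([0.1234, -0.5678, 0.9])
print(quantize_fixed_point(x, 16, 12))  # fine grid: near-lossless
print(quantize_fixed_point(x, 4, 2))    # coarse grid: large rounding error
```

Fewer bits mean cheaper multipliers and less memory traffic, which is where the energy savings in the arithmetic come from; the paper's contribution is choosing how far precision can be lowered before classification accuracy degrades.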
Cite
Text

Moons et al. "Energy-Efficient ConvNets Through Approximate Computing." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016. doi:10.1109/WACV.2016.7477614

Markdown

[Moons et al. "Energy-Efficient ConvNets Through Approximate Computing." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016.](https://mlanthology.org/wacv/2016/moons2016wacv-energy/) doi:10.1109/WACV.2016.7477614

BibTeX
@inproceedings{moons2016wacv-energy,
title = {{Energy-Efficient ConvNets Through Approximate Computing}},
author = {Moons, Bert and De Brabandere, Bert and Van Gool, Luc and Verhelst, Marian},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2016},
pages = {1-8},
doi = {10.1109/WACV.2016.7477614},
url = {https://mlanthology.org/wacv/2016/moons2016wacv-energy/}
}