ShiftAddNet: A Hardware-Inspired Deep Network
Abstract
Multiplication (e.g., convolution) is arguably a cornerstone of modern deep neural networks (DNNs). However, intensive multiplications incur high resource costs that challenge the deployment of DNNs on resource-constrained edge devices, motivating several attempts at multiplication-less deep networks. This paper presents ShiftAddNet, whose main inspiration is drawn from a common practice in energy-efficient hardware implementation, namely that multiplication can instead be performed with additions and logical bit-shifts. We leverage this idea to explicitly parameterize deep networks accordingly, yielding a new type of deep network that involves only bit-shift and additive weight layers. This hardware-inspired ShiftAddNet immediately leads to both energy-efficient inference and training, without compromising expressive capacity compared to standard DNNs. The two complementary operation types (bit-shift and add) additionally enable finer-grained control of the model's learning capacity, leading to a more flexible trade-off between accuracy and (training) efficiency, as well as improved robustness to quantization and pruning. We conduct extensive experiments and ablation studies, all backed up by our FPGA-based ShiftAddNet implementation and energy measurements. Compared to existing DNNs or other multiplication-less models, ShiftAddNet aggressively reduces the hardware-quantified energy cost of DNN training and inference by over 80%, while offering comparable or better accuracies. Code and pre-trained models are available at https://github.com/RICE-EIC/ShiftAddNet.
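To make the core idea concrete, below is a minimal NumPy sketch, not the authors' implementation: the names (`shift_dense`, `add_dense`, `sign`, `exponent`, `w_add`) are hypothetical and chosen only to illustrate how a layer built from bit-shifts and additions could replace a multiplication-based layer.

```python
# A minimal sketch (assumption: illustrative only, not ShiftAddNet's actual code).
import numpy as np

def shift_dense(x, sign, exponent):
    """Shift-style dense layer: each weight is sign * 2**exponent, so every
    "product" is just a sign flip plus a bit-shift (emulated via np.ldexp)."""
    # x: (batch, in_features); sign, exponent: (in_features, out_features)
    shifted = np.ldexp(x[:, :, None], exponent[None, :, :].astype(np.int32))
    # The sign multiply below is only a NumPy convenience; in hardware it is a
    # conditional negation, and the final reduction needs only additions.
    return (sign[None, :, :] * shifted).sum(axis=1)

def add_dense(x, weight):
    """Add-style dense layer (AdderNet-like): replaces the inner product with a
    negative L1 distance, which requires only subtractions and additions."""
    # x: (batch, in_features); weight: (in_features, out_features)
    return -np.abs(x[:, :, None] - weight[None, :, :]).sum(axis=1)

# Toy forward pass: one shift layer followed by one add layer.
x = np.random.randn(4, 8)
sign = np.random.choice([-1.0, 1.0], size=(8, 16))
exponent = np.random.randint(-3, 1, size=(8, 16))
w_add = np.random.randn(16, 16)
y = add_dense(shift_dense(x, sign, exponent), w_add)
print(y.shape)  # (4, 16)
```

The sketch assumes dense layers for brevity; the same substitution applies to convolutions, and the two operation types together give the coarse (shift) and fine (add) control of capacity described in the abstract.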
Cite
Text
You et al. "ShiftAddNet: A Hardware-Inspired Deep Network." Neural Information Processing Systems, 2020.Markdown
[You et al. "ShiftAddNet: A Hardware-Inspired Deep Network." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/you2020neurips-shiftaddnet/)BibTeX
@inproceedings{you2020neurips-shiftaddnet,
title = {{ShiftAddNet: A Hardware-Inspired Deep Network}},
author = {You, Haoran and Chen, Xiaohan and Zhang, Yongan and Li, Chaojian and Li, Sicheng and Liu, Zihao and Wang, Zhangyang and Lin, Yingyan},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/you2020neurips-shiftaddnet/}
}