Conjugate Adder Net (CAddNet) - A Space-Efficient Approximate CNN
Abstract
The AdderNet was recently developed as a way to implement deep neural networks without needing multiplication operations to combine weights and inputs. Instead, absolute values of the difference between weights and inputs are used, greatly reducing the gate-level implementation complexity. Training of AdderNets is challenging, however, and the loss curves during training tend to fluctuate significantly. In this paper we propose the Conjugate Adder Network, or CAddNet, which uses the difference between the absolute values of conjugate pairs of inputs and the weights. We show that this can be implemented simply via a single minimum operation, resulting in a roughly 50% reduction in logic gate complexity as compared with AdderNets. The CAddNet method also stabilizes training as compared with AdderNets, yielding training curves similar to standard CNNs.
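The abstract contrasts AdderNet's multiplication-free similarity (a negative L1 distance between inputs and weights) with CAddNet's single-minimum formulation. As a minimal sketch of that contrast: the function names below are hypothetical, the L1 measure follows the AdderNet description in the abstract, and the min-based rewrite uses only the standard identity |a - b| = a + b - 2·min(a, b), not the paper's exact conjugate-pair construction.

```python
import numpy as np

def addernet_similarity(x, w):
    """AdderNet-style similarity: negative L1 distance between an
    input patch x and a filter w (additions and abs, no multiplies)."""
    return -np.sum(np.abs(x - w))

def min_based_abs_diff(a, b):
    """Rewrite an absolute difference with a single minimum:
    |a - b| = a + b - 2*min(a, b) for any real a, b."""
    return a + b - 2 * np.minimum(a, b)

# The two expressions agree element-wise.
x = np.array([0.5, -1.0, 2.0])
w = np.array([1.5, 0.5, -0.5])
assert np.allclose(np.abs(x - w), min_based_abs_diff(x, w))
```

In hardware terms, a minimum is a comparator plus a multiplexer, which is cheaper at the gate level than a subtract-then-absolute-value datapath; this is the flavor of saving the abstract attributes to replacing the absolute difference with a min operation.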
Cite
Text
Shen et al. "Conjugate Adder Net (CAddNet) - A Space-Efficient Approximate CNN." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00316

Markdown
[Shen et al. "Conjugate Adder Net (CAddNet) - A Space-Efficient Approximate CNN." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/shen2022cvprw-conjugate/) doi:10.1109/CVPRW56347.2022.00316

BibTeX
@inproceedings{shen2022cvprw-conjugate,
title = {{Conjugate Adder Net (CAddNet) - A Space-Efficient Approximate CNN}},
author = {Shen, Lulan and Ziaeefard, Maryam and Meyer, Brett H. and Gross, Warren J. and Clark, James J.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2022},
pages = {2792--2796},
doi = {10.1109/CVPRW56347.2022.00316},
url = {https://mlanthology.org/cvprw/2022/shen2022cvprw-conjugate/}
}