LAB: Learnable Activation Binarizer for Binary Neural Networks

Abstract

Binary Neural Networks (BNNs) are receiving an upsurge of attention for bringing power-hungry deep learning towards edge devices. The traditional wisdom in this space is to employ sign() for binarizing feature maps. We argue and illustrate that sign() is a uniqueness bottleneck, limiting information propagation throughout the network. To alleviate this, we propose to dispense with sign() and replace it with a learnable activation binarizer (LAB), allowing the network to learn a fine-grained binarization kernel per layer, as opposed to global thresholding. LAB is a novel, universal module that can seamlessly be integrated into existing architectures. To confirm this, we plug it into four seminal BNNs and show a considerable performance boost at the cost of a tolerable increase in latency and complexity. Finally, we build an end-to-end BNN (coined LAB-BNN) around LAB and demonstrate that it achieves performance on par with the state of the art on ImageNet. The codebase in the supplementary material will be made publicly available upon acceptance.
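
The abstract leaves the internals of LAB to the paper itself; the snippet below is only a minimal PyTorch sketch of the general idea described above, i.e. a per-layer learnable kernel applied before hard binarization, trained with a straight-through gradient estimator. The class names, the depthwise-convolution choice, and the kernel size are illustrative assumptions, not the authors' exact module.

import torch
import torch.nn as nn

class _BinarizeSTE(torch.autograd.Function):
    # Hard binarization to {-1, +1} in the forward pass; straight-through
    # gradient (clipped outside [-1, 1]) in the backward pass.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

class LearnableActivationBinarizer(nn.Module):
    # Illustrative stand-in for LAB: a per-layer, depthwise kernel is learned
    # and applied before the hard threshold, instead of thresholding the raw
    # feature map with a global sign().
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.kernel = nn.Conv2d(channels, channels, kernel_size,
                                padding=kernel_size // 2,
                                groups=channels, bias=True)

    def forward(self, x):
        return _BinarizeSTE.apply(self.kernel(x))

# Drop-in usage in place of a sign() activation inside a binary conv block.
x = torch.randn(2, 64, 8, 8)
lab = LearnableActivationBinarizer(channels=64)
out = lab(x)          # hard-binarized activations
out.sum().backward()  # gradients reach the learnable kernel via the STE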

Cite

Text

Falkena et al. "LAB: Learnable Activation Binarizer for Binary Neural Networks." Winter Conference on Applications of Computer Vision, 2023.

Markdown

[Falkena et al. "LAB: Learnable Activation Binarizer for Binary Neural Networks." Winter Conference on Applications of Computer Vision, 2023.](https://mlanthology.org/wacv/2023/falkena2023wacv-lab/)

BibTeX

@inproceedings{falkena2023wacv-lab,
  title     = {{LAB: Learnable Activation Binarizer for Binary Neural Networks}},
  author    = {Falkena, Sieger and Jamali-Rad, Hadi and van Gemert, Jan},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2023},
  pages     = {6425--6434},
  url       = {https://mlanthology.org/wacv/2023/falkena2023wacv-lab/}
}