Training Binary Neural Networks Using the Bayesian Learning Rule

Abstract

Neural networks with binary weights are computationally efficient and hardware-friendly, but training them is challenging because it involves a discrete optimization problem. Surprisingly, ignoring the discrete nature of the problem and using gradient-based methods, such as the Straight-Through Estimator, still works well in practice. This raises the question: are there principled approaches that justify such methods? In this paper, we propose such an approach using the Bayesian learning rule. When applied to estimate a Bernoulli distribution over the binary weights, the rule yields an algorithm that justifies some of the algorithmic choices made by previous approaches. The algorithm not only achieves state-of-the-art performance, but also enables uncertainty estimation and continual learning to avoid catastrophic forgetting. Our work provides a principled approach for training binary neural networks that also justifies and extends existing approaches.
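
Example

For intuition, the following is a minimal sketch (not the authors' reference implementation) of the kind of update the Bayesian learning rule gives for a mean-field Bernoulli distribution over binary weights w in {-1, +1}: the natural parameter lam is decayed and moved along a stochastic gradient computed through a Gumbel-softmax-style relaxation. The toy loss, dimensions, and names (lam, tau, lr) are illustrative assumptions, and the paper's full natural-gradient scaling is simplified here.

import numpy as np

rng = np.random.default_rng(0)

def relaxed_sample(lam, tau, rng):
    # Gumbel-softmax / concrete relaxation of the binary weight with
    # natural parameter lam: w_r = tanh((lam + delta) / tau), where
    # delta is logistic noise.
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=lam.shape)
    delta = 0.5 * np.log(u / (1.0 - u))
    return np.tanh((lam + delta) / tau)

def loss_grad(w, x, y):
    # Gradient of the toy squared loss 0.5 * (x @ w - y)**2 w.r.t. w.
    return x * (x @ w - y)

d, tau, lr = 4, 1.0, 0.1          # illustrative sizes and step size
lam = np.zeros(d)                 # natural parameters of the Bernoulli
x, y = rng.normal(size=d), 1.0    # one synthetic data point

for _ in range(200):
    w_r = relaxed_sample(lam, tau, rng)
    # Chain rule through the relaxation: d w_r / d lam = (1 - w_r**2) / tau.
    # (The paper's full natural-gradient scaling also divides by a Fisher
    # term 1 - tanh(lam)**2; omitted here to keep the toy example stable.)
    g = loss_grad(w_r, x, y) * (1.0 - w_r**2) / tau
    # Bayesian learning rule: decay the natural parameter and step along
    # the stochastic gradient of the expected loss.
    lam = (1.0 - lr) * lam - lr * g

w_map = np.sign(lam)              # deterministic binary weights at test time
print("binary weights:", w_map, "prediction:", x @ w_map)

Sampling w = sign(lam + delta) instead of w_map recovers a posterior over binary networks, which is what enables the uncertainty estimates mentioned in the abstract.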

Cite

Text

Meng et al. "Training Binary Neural Networks Using the Bayesian Learning Rule." International Conference on Machine Learning, 2020.

Markdown

[Meng et al. "Training Binary Neural Networks Using the Bayesian Learning Rule." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/meng2020icml-training/)

BibTeX

@inproceedings{meng2020icml-training,
  title     = {{Training Binary Neural Networks Using the Bayesian Learning Rule}},
  author    = {Meng, Xiangming and Bachmann, Roman and Khan, Mohammad Emtiyaz},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {6852--6861},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/meng2020icml-training/}
}