Lossless Hardening with $\partial\mathbb{B}$ Nets

Abstract

$\partial\mathbb{B}$ nets are differentiable neural networks that learn discrete boolean-valued functions by gradient descent. $\partial\mathbb{B}$ nets have two semantically equivalent aspects: a differentiable soft-net, with real weights, and a non-differentiable hard-net, with boolean weights. We train the soft-net by backpropagation and then "harden" the learned weights to yield boolean weights that bind with the hard-net. The result is a learned discrete function. Unlike existing approaches to neural network binarization, the "hardening" operation involves no loss of accuracy. Preliminary experiments demonstrate that $\partial\mathbb{B}$ nets achieve comparable performance on standard machine learning problems yet are compact (due to 1-bit weights) and interpretable (due to the logical nature of the learned functions).
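The soft-net/hard-net pairing described above can be illustrated with a minimal sketch. This is not the paper's actual construction (the abstract does not specify it); it assumes a simple soft gate with real weights in $[0, 1]$, a hardening step that thresholds each weight at 0.5, and a boolean hard gate with matching semantics:

```python
# Illustrative sketch only, not the paper's construction: a "soft" gate
# with real weights and a "hard" gate with boolean weights obtained by
# thresholding ("hardening"). The gate acts as an AND over the inputs
# that the weights select.

def soft_gate(w, x):
    """Differentiable surrogate: output in [0, 1]."""
    out = 1.0
    for wi, xi in zip(w, x):
        # If wi is near 1, the factor requires xi; if wi is near 0,
        # the factor is near 1 and xi is effectively ignored.
        out *= wi * xi + (1.0 - wi)
    return out

def harden(w):
    """Map real weights to boolean weights (assumed threshold: 0.5)."""
    return [wi > 0.5 for wi in w]

def hard_gate(w_bool, x_bool):
    """Discrete semantics: AND over the inputs selected by w_bool."""
    return all((not wi) or xi for wi, xi in zip(w_bool, x_bool))

# Hypothetical trained soft weights.
w = [0.9, 0.2, 0.8]

for x in [(1, 1, 1), (1, 0, 0), (0, 1, 1)]:
    soft = soft_gate(w, [float(v) for v in x]) > 0.5
    hard = hard_gate(harden(w), [bool(v) for v in x])
    assert soft == hard  # decisions agree on these boolean inputs
```

In this toy sketch the thresholded soft-net and the hard-net happen to agree on the inputs tested; the paper's claim is stronger, namely that its hardening operation preserves behavior exactly rather than approximately.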

Cite

Text

Wright. "Lossless Hardening with $\partial\mathbb{B}$ Nets." ICML 2023 Workshops: Differentiable_Almost_Everything, 2023.

Markdown

[Wright. "Lossless Hardening with $\partial\mathbb{B}$ Nets." ICML 2023 Workshops: Differentiable_Almost_Everything, 2023.](https://mlanthology.org/icmlw/2023/wright2023icmlw-lossless/)

BibTeX

@inproceedings{wright2023icmlw-lossless,
  title     = {{Lossless Hardening with $\partial\mathbb{B}$ Nets}},
  author    = {Wright, Ian},
  booktitle = {ICML 2023 Workshops: Differentiable_Almost_Everything},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/wright2023icmlw-lossless/}
}