Differentiable Weightless Neural Networks

Abstract

We introduce the Differentiable Weightless Neural Network (DWN), a model based on interconnected lookup tables. Training of DWNs is enabled by a novel Extended Finite Difference technique for approximate differentiation of binary values. We propose Learnable Mapping, Learnable Reduction, and Spectral Regularization to further improve the accuracy and efficiency of these models. We evaluate DWNs in three edge computing contexts: (1) an FPGA-based hardware accelerator, where they demonstrate superior latency, throughput, energy efficiency, and model area compared to state-of-the-art solutions; (2) a low-power microcontroller, where they achieve better accuracy than XGBoost under stringent memory constraints; and (3) ultra-low-cost chips, where they consistently outperform small models in both accuracy and projected hardware area. DWNs also compare favorably against leading approaches on tabular datasets, achieving a higher average rank. Overall, our work positions DWNs as a pioneering solution for edge-compatible high-throughput neural networks.
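
As a rough illustration of the ideas named above, the sketch below (plain Python, not taken from the paper) shows a single lookup-table node over binary inputs together with a plain finite-difference estimate of the output's sensitivity to each input bit. The function names, table initialization, and gradient rule are illustrative assumptions; the paper's Extended Finite Difference, Learnable Mapping, Learnable Reduction, and Spectral Regularization are not reproduced here.

# Illustrative sketch only: one lookup-table (LUT) node over binary inputs,
# with a naive finite-difference sensitivity estimate. This is NOT the paper's
# Extended Finite Difference; names and details below are assumptions.
import numpy as np

def lut_forward(table, x_bits):
    # x_bits is a 0/1 sequence of length n; it indexes a table of 2^n entries.
    idx = int("".join(str(b) for b in x_bits), 2)
    return table[idx]

def lut_input_sensitivities(table, x_bits):
    # Finite difference over a binary variable:
    # g_i ~= f(x with bit i = 1) - f(x with bit i = 0).
    grads = np.zeros(len(x_bits))
    for i in range(len(x_bits)):
        hi = list(x_bits); hi[i] = 1
        lo = list(x_bits); lo[i] = 0
        grads[i] = lut_forward(table, hi) - lut_forward(table, lo)
    return grads

# Hypothetical usage: a 3-input LUT with real-valued (trainable) entries.
table = np.random.randn(8)              # 2^3 entries
x = [1, 0, 1]                           # binary inputs from a previous layer
print(lut_forward(table, x))            # LUT output for this input pattern
print(lut_input_sensitivities(table, x))  # approximate per-bit sensitivities

In a full DWN, many such nodes would be interconnected in layers, with the table entries (and, per the abstract, the input mapping and reduction) learned by gradient-based training; this sketch only conveys the LUT-plus-finite-difference intuition.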

Cite

Text

Bacellar et al. "Differentiable Weightless Neural Networks." International Conference on Machine Learning, 2024.

Markdown

[Bacellar et al. "Differentiable Weightless Neural Networks." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/bacellar2024icml-differentiable/)

BibTeX

@inproceedings{bacellar2024icml-differentiable,
  title     = {{Differentiable Weightless Neural Networks}},
  author    = {Bacellar, Alan Tendler Leibel and Susskind, Zachary and Breternitz Jr, Mauricio and John, Eugene and John, Lizy Kurian and Lima, Priscila Machado Vieira and França, Felipe M.G.},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {2277--2295},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/bacellar2024icml-differentiable/}
}