Convolutional Differentiable Logic Gate Networks

Abstract

With the increasing inference cost of machine learning models, there is growing interest in models with fast and efficient inference. Recently, an approach for learning logic gate networks directly via a differentiable relaxation was proposed. Logic gate networks are faster than conventional neural network approaches because their inference only requires logic gate operators such as NAND, OR, and XOR, which are the underlying building blocks of current hardware and can be executed efficiently. We build on this idea, extending it with deep logic gate tree convolutions, logical OR pooling, and residual initializations. This allows scaling logic gate networks up by over one order of magnitude and utilizing the paradigm of convolution. On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller.
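To make the "differentiable relaxation" concrete: hard Boolean gates are replaced with real-valued surrogates that agree with the discrete truth tables at binary inputs but are differentiable everywhere, so gate choices can be trained by gradient descent. The sketch below uses the standard probabilistic relaxations of a few gates; it is an illustration under that assumption, not the authors' implementation.

```python
# Probabilistic relaxations of logic gates: inputs in [0, 1] are treated as
# probabilities of being True, and each gate returns the probability that
# its hard counterpart outputs True under independent inputs.

def soft_and(a: float, b: float) -> float:
    return a * b

def soft_or(a: float, b: float) -> float:
    return a + b - a * b

def soft_xor(a: float, b: float) -> float:
    return a + b - 2 * a * b

def soft_nand(a: float, b: float) -> float:
    return 1 - a * b

# At binary inputs, each relaxation recovers the discrete truth table:
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert soft_and(a, b) == float(bool(a) and bool(b))
        assert soft_or(a, b) == float(bool(a) or bool(b))
        assert soft_xor(a, b) == float(bool(a) != bool(b))
        assert soft_nand(a, b) == float(not (bool(a) and bool(b)))
```

After training, the relaxed gates are discretized back to hard gates, so inference runs on plain Boolean operations.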

Cite

Text

Petersen et al. "Convolutional Differentiable Logic Gate Networks." Neural Information Processing Systems, 2024. doi:10.52202/079017-3851

Markdown

[Petersen et al. "Convolutional Differentiable Logic Gate Networks." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/petersen2024neurips-convolutional/) doi:10.52202/079017-3851

BibTeX

@inproceedings{petersen2024neurips-convolutional,
  title     = {{Convolutional Differentiable Logic Gate Networks}},
  author    = {Petersen, Felix and Kuehne, Hilde and Borgelt, Christian and Welzel, Julian and Ermon, Stefano},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3851},
  url       = {https://mlanthology.org/neurips/2024/petersen2024neurips-convolutional/}
}