Differentiable MaxSAT Message Passing
Abstract
The message-passing principle underlies the most popular neural networks for graph-structured data. However, current message-passing approaches use black-box neural models that transform features over a continuous domain, which limits the descriptive capability of GNNs. In this work, we explore a novel type of message passing based on a differentiable satisfiability solver. Our model learns logical rules that encode which messages are passed from one node to another and how. The rules are learned in a relaxed continuous space, which renders the training process end-to-end differentiable and thus enables standard gradient-based training. In our experiments, we show that MAXSAT-MP learns arithmetic operations and is on par with state-of-the-art GNNs on graph-structured data.
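To make the relaxation idea concrete, below is a minimal sketch of one common way to render MaxSAT differentiable: relax truth assignments to values in (0, 1) and score each clause with a smooth, product-based satisfaction probability, so that both the assignment and the rule structure can be trained by gradient descent. This is an illustrative construction only, not the paper's exact formulation (which may use a different relaxation, e.g. an SDP-based one in the style of SATNet); all names and shapes here (`soft_maxsat`, `S`, `z`) are hypothetical.

```python
import torch

# Hypothetical setup: n variables, m clauses.
# S[j, i] in [-1, 1] is a relaxed literal-sign matrix (learned):
#   S[j, i] > 0  -> variable i appears positively in clause j,
#   S[j, i] < 0  -> negatively,
#   S[j, i] ~ 0  -> variable i is absent from clause j.
def soft_maxsat(S: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Differentiable MaxSAT score for relaxed assignments z in (0, 1)^n."""
    pos = torch.clamp(S, min=0.0)         # positive-occurrence strengths
    neg = torch.clamp(-S, min=0.0)        # negative-occurrence strengths
    # Probability that each literal is *false*: 1 - z for positive literals,
    # z for negative ones, weighted by how strongly the variable participates.
    lit_false = 1.0 - pos * z - neg * (1.0 - z)   # shape (m, n)
    clause_sat = 1.0 - lit_false.prod(dim=1)      # P(clause j satisfied), (m,)
    return clause_sat.sum()                       # soft count of satisfied clauses

n_vars, n_clauses = 8, 16
S = torch.randn(n_clauses, n_vars, requires_grad=True)  # relaxed rule structure
logits = torch.zeros(n_vars, requires_grad=True)        # relaxed assignment

opt = torch.optim.Adam([S, logits], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    z = torch.sigmoid(logits)                 # truth values relaxed to (0, 1)
    loss = -soft_maxsat(torch.tanh(S), z)     # maximize satisfied clauses
    loss.backward()
    opt.step()
```

Because both the clause structure `S` and the assignment `z` live in a continuous space, the whole objective is end-to-end differentiable and such a layer can, in principle, be dropped into a message-passing step and trained with standard optimizers, which is the property the abstract emphasizes.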
Cite
Text
Alesiani et al. "Differentiable MaxSAT Message Passing." ICML 2023 Workshops: Differentiable_Almost_Everything, 2023.
Markdown
[Alesiani et al. "Differentiable MaxSAT Message Passing." ICML 2023 Workshops: Differentiable_Almost_Everything, 2023.](https://mlanthology.org/icmlw/2023/alesiani2023icmlw-differentiable/)
BibTeX
@inproceedings{alesiani2023icmlw-differentiable,
title = {{Differentiable MaxSAT Message Passing}},
author = {Alesiani, Francesco and Morbiducci, Cristóbal Corvalán and Zopf, Markus},
booktitle = {ICML 2023 Workshops: Differentiable_Almost_Everything},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/alesiani2023icmlw-differentiable/}
}