Perceptron Learning of SAT

Abstract

Boolean satisfiability (SAT), as a canonical NP-complete decision problem, is one of the most important problems in computer science. In practice, real-world SAT sentences are drawn from a distribution that may result in efficient algorithms for their solution. Such SAT instances are likely to have shared characteristics and substructures. This work approaches the exploration of a family of SAT solvers as a learning problem. In particular, we relate polynomial time solvability of a SAT subset to a notion of margin between sentences mapped by a feature function into a Hilbert space. Provided this mapping is based on polynomial time computable statistics of a sentence, we show that the existence of a margin between these data points implies the existence of a polynomial time solver for that SAT subset based on the Davis-Putnam-Logemann-Loveland algorithm. Furthermore, we show that a simple perceptron-style learning rule will find an optimal SAT solver with a bounded number of training updates. We derive a linear time computable set of features and show analytically that margins exist for important polynomial special cases of SAT. Empirical results show an order of magnitude improvement over a state-of-the-art SAT solver on a hardware verification task.
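The margin and bounded-update arguments in the abstract rest on the classic perceptron mistake bound. As a rough illustration only (this is not the paper's SAT solver, and the feature map and data here are invented for the example), the following sketch shows the perceptron update rule whose mistake count is bounded whenever the feature vectors are linearly separable with a margin:

```python
import numpy as np

def perceptron_train(X, y, max_epochs=100):
    """Classic perceptron: the weight vector is updated only on
    mistakes. When the data are linearly separable with margin
    gamma, the number of updates is bounded by (R / gamma)^2,
    where R bounds the feature-vector norms."""
    w = np.zeros(X.shape[1])
    updates = 0
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:  # misclassified (or on the boundary)
                w = w + yi * xi          # perceptron update
                updates += 1
                mistakes += 1
        if mistakes == 0:                # converged: all points correct
            break
    return w, updates

# Toy linearly separable data (hypothetical, for illustration):
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, updates = perceptron_train(X, y)
```

In the paper's setting the analogous linear function scores branching decisions inside a DPLL-style solver rather than classifying labeled points, but the update rule and the margin-based bound on the number of training updates have the same form.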

Cite

Text

Flint and Blaschko. "Perceptron Learning of SAT." Neural Information Processing Systems, 2012.

Markdown

[Flint and Blaschko. "Perceptron Learning of SAT." Neural Information Processing Systems, 2012.](https://mlanthology.org/neurips/2012/flint2012neurips-perceptron/)

BibTeX

@inproceedings{flint2012neurips-perceptron,
  title     = {{Perceptron Learning of SAT}},
  author    = {Flint, Alex and Blaschko, Matthew},
  booktitle = {Neural Information Processing Systems},
  year      = {2012},
  pages     = {2771--2779},
  url       = {https://mlanthology.org/neurips/2012/flint2012neurips-perceptron/}
}