Small Is Beautiful: A Brute-Force Approach to Learning First-Order Formulas
Abstract
We describe a method for learning formulas in first-order logic using a brute-force, smallest-first search. The method is exceedingly simple: it generates all irreducible well-formed formulas up to a fixed size and tests them against a set of examples. Although the method has obvious limitations due to its computational complexity, it performs surprisingly well on some tasks. This paper describes experiments with two applications of the method in the Multi-tac system, a program synthesizer for constraint satisfaction problems. In the first application, axioms are learned; in the second, search control rules are learned. We describe these experiments and consider why searching the space of small formulas makes sense in our applications.
Introduction
Most machine learning systems prefer smaller, simpler hypotheses to larger, more complex ones. This bias is a form of Occam's razor. While Occam's razor has obvious aesthetic appeal, some researchers have attempted to ...
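The control structure the abstract describes — generate candidate formulas in order of increasing size, test each against labeled examples, and return the first consistent one — can be sketched as follows. This is a toy illustration, not the paper's implementation: for simplicity it enumerates conjunctions of hypothetical primitive tests over numeric pairs rather than full irreducible first-order formulas, but the smallest-first, generate-and-test loop is the same idea.

```python
from itertools import product

# Hypothetical primitive tests; the paper's actual formula language
# (first-order, with irreducibility filtering) is richer than this.
PRIMITIVES = ["x > 0", "y > 0", "x > y"]

def holds(test, example):
    """Evaluate one primitive test on an example (x, y) pair."""
    x, y = example
    return eval(test)  # eval is acceptable for this toy sketch

def consistent(formula, positives, negatives):
    """A conjunction is consistent if it covers every positive
    example and excludes every negative one."""
    satisfies = lambda ex: all(holds(t, ex) for t in formula)
    return (all(satisfies(p) for p in positives)
            and not any(satisfies(n) for n in negatives))

def smallest_first_search(positives, negatives, max_size=3):
    """Brute-force, smallest-first search: try all conjunctions of
    1 primitive, then 2, and so on, up to max_size."""
    for size in range(1, max_size + 1):
        for formula in product(PRIMITIVES, repeat=size):
            if consistent(formula, positives, negatives):
                return formula  # first hit is a smallest consistent formula
    return None  # no consistent formula within the size bound

positives = [(2, 1), (5, 3)]
negatives = [(1, 2), (-1, -1)]
print(smallest_first_search(positives, negatives))  # a size-1 formula suffices here
```

The search returns as soon as any consistent formula is found, so the size bias is built into the enumeration order itself rather than into a scoring function, which mirrors the "smallest-first" framing of the abstract.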
Cite
Text:
Minton and Underwood. "Small Is Beautiful: A Brute-Force Approach to Learning First-Order Formulas." AAAI Conference on Artificial Intelligence, 1994.

Markdown:
[Minton and Underwood. "Small Is Beautiful: A Brute-Force Approach to Learning First-Order Formulas." AAAI Conference on Artificial Intelligence, 1994.](https://mlanthology.org/aaai/1994/minton1994aaai-small/)

BibTeX:
@inproceedings{minton1994aaai-small,
title = {{Small Is Beautiful: A Brute-Force Approach to Learning First-Order Formulas}},
author = {Minton, Steven and Underwood, Ian},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {1994},
  pages = {168--174},
url = {https://mlanthology.org/aaai/1994/minton1994aaai-small/}
}