Hardness Results for Learning First-Order Representations and Programming by Demonstration

Abstract

Learning from “structured examples” is necessary in a number of settings, including inductive logic programming. Here we analyze a simple learning problem in which examples have non-trivial structure: specifically, a learning problem in which concepts are strings over a fixed alphabet, examples are deterministic finite automata (DFAs), and a string represents the set of all DFAs that accept it. We show that solving this “dual” DFA learning problem is hard, under cryptographic assumptions. This result implies the hardness of several other more natural learning problems, including learning the description logic CLASSIC from subconcepts, and learning arity-two “determinate” function-free Prolog clauses from ground clauses. The result also implies the hardness of two formal problems related to the area of “programming by demonstration”: learning straightline programs over a fixed operator set from input-output pairs, and learning straightline programs from input-output pairs and “partial traces”.
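The “dual” setting described in the abstract inverts the usual roles: the target concept is a string, and each example is a DFA, labeled positive exactly when it accepts that string. A minimal sketch of this setup (all names and the DFA encoding here are illustrative assumptions, not from the paper):

```python
# Sketch of the "dual" DFA learning setting: a concept is a string,
# and an example DFA is a positive example iff it accepts that string.
# A DFA is encoded here as (start_state, accepting_states, transition_dict).

def dfa_accepts(dfa, string):
    """Run a deterministic finite automaton on a string."""
    start, accepting, delta = dfa
    state = start
    for ch in string:
        state = delta[(state, ch)]
    return state in accepting

# Example DFA over the alphabet {'a', 'b'} that accepts strings
# containing an odd number of 'a's.
delta = {(0, 'a'): 1, (0, 'b'): 0,
         (1, 'a'): 0, (1, 'b'): 1}
odd_a_dfa = (0, {1}, delta)

# Under the dual view, a target concept string such as "ab" labels this
# DFA positive, since the DFA accepts "ab"; the concept "aa" labels it
# negative, since the DFA rejects "aa".
print(dfa_accepts(odd_a_dfa, "ab"))  # True
print(dfa_accepts(odd_a_dfa, "aa"))  # False
```

The learner's task in this dual problem is to recover the unknown concept string from a sample of DFAs labeled in this way, which is what the paper shows to be hard under cryptographic assumptions.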

Cite

Text

Cohen. "Hardness Results for Learning First-Order Representations and Programming by Demonstration." Machine Learning, 1998. doi:10.1023/A:1007406511732

Markdown

[Cohen. "Hardness Results for Learning First-Order Representations and Programming by Demonstration." Machine Learning, 1998.](https://mlanthology.org/mlj/1998/cohen1998mlj-hardness/) doi:10.1023/A:1007406511732

BibTeX

@article{cohen1998mlj-hardness,
  title     = {{Hardness Results for Learning First-Order Representations and Programming by Demonstration}},
  author    = {Cohen, William W.},
  journal   = {Machine Learning},
  year      = {1998},
  pages     = {57--87},
  doi       = {10.1023/A:1007406511732},
  volume    = {30},
  url       = {https://mlanthology.org/mlj/1998/cohen1998mlj-hardness/}
}