Generality's Price: Inescapable Deficiencies in Machine-Learned Programs
Abstract
This paper investigates some delicate tradeoffs between the generality of an algorithmic learning device and the quality of the programs it learns successfully. There are results to the effect that, thanks to small increases in the generality of a learning device, the computational complexity of some successfully learned programs is provably, unalterably suboptimal. There are also results in which the complexity of successfully learned programs is asymptotically optimal and the learning device is general, but, still thanks to the generality, some of those optimal, learned programs are provably, unalterably information-deficient — in some cases, deficient as to safe, algorithmic extractability/provability of the fact that they are even approximately optimal. The paper is on the borderline between learning theory, complexity theory, and logic.
Cite
Text
Case et al. "Generality's Price: Inescapable Deficiencies in Machine-Learned Programs." Annual Conference on Computational Learning Theory, 2003. doi:10.1007/978-3-540-45167-9_50
Markdown
[Case et al. "Generality's Price: Inescapable Deficiencies in Machine-Learned Programs." Annual Conference on Computational Learning Theory, 2003.](https://mlanthology.org/colt/2003/case2003colt-generality/) doi:10.1007/978-3-540-45167-9_50
BibTeX
@inproceedings{case2003colt-generality,
title = {{Generality's Price: Inescapable Deficiencies in Machine-Learned Programs}},
author = {Case, John and Chen, Keh-Jiann and Jain, Sanjay and Merkle, Wolfgang and Royer, James S.},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2003},
  pages = {684--698},
doi = {10.1007/978-3-540-45167-9_50},
url = {https://mlanthology.org/colt/2003/case2003colt-generality/}
}