Microchoice Bounds and Self Bounding Learning Algorithms

Abstract

A major topic in machine learning is to determine good upper bounds on the true error rates of learned hypotheses based upon their empirical performance on training data. In this paper, we demonstrate new adaptive bounds designed for learning algorithms that operate by making a sequence of choices. These bounds, which we call Microchoice bounds, are similar to Occam-style bounds and can be used to make learning algorithms self-bounding in the style of Freund (1998). We then show how to combine these bounds with Freund's query-tree approach, producing a version of Freund's query-tree structure that can be implemented much more efficiently.

Cite

Text

Langford and Blum. "Microchoice Bounds and Self Bounding Learning Algorithms." Machine Learning, 2003. doi:10.1023/A:1022806918936

Markdown

[Langford and Blum. "Microchoice Bounds and Self Bounding Learning Algorithms." Machine Learning, 2003.](https://mlanthology.org/mlj/2003/langford2003mlj-microchoice/) doi:10.1023/A:1022806918936

BibTeX

@article{langford2003mlj-microchoice,
  title     = {{Microchoice Bounds and Self Bounding Learning Algorithms}},
  author    = {Langford, John and Blum, Avrim},
  journal   = {Machine Learning},
  year      = {2003},
  pages     = {165--179},
  doi       = {10.1023/A:1022806918936},
  volume    = {51},
  url       = {https://mlanthology.org/mlj/2003/langford2003mlj-microchoice/}
}