Strong Minimax Lower Bounds for Learning
Abstract
Minimax lower bounds for concept learning state, for example, that for each sample size n and learning rule gn, there exists a distribution of the observation X and a concept C to be learnt such that the expected error of gn is at least a constant times V/n, where V is the VC dimension of the concept class. However, these bounds say nothing about the rate of decrease of the error for a fixed distribution-concept pair. In this paper we investigate minimax lower bounds in such a (stronger) sense. We show that, for several natural k-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any sequence of learning rules gn there exists a fixed distribution of X and a fixed concept C such that the expected error is larger than a constant times k/n for infinitely many n. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
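As a rough sketch of the distinction the abstract draws (the notation below is assumed, not taken from the paper: L_n(g_n, X, C) denotes the expected probability of error of the rule gn trained on n samples drawn from the pair (X, C), and c, c' are absolute constants), the classical bound allows the hard distribution-concept pair to change with n, whereas the strong bound fixes one pair that defeats the whole sequence of rules infinitely often:

```latex
% Classical minimax lower bound: for every n and every rule g_n there is a
% pair (X, C), which may depend on n, such that
\inf_{g_n} \; \sup_{(X,\,C)} \; \mathbb{E}\, L_n(g_n, X, C) \;\ge\; \frac{c\,V}{n}.

% Strong minimax lower bound (the sense studied in this paper): for every
% sequence of rules (g_n) there is a single fixed pair (X, C) such that
\mathbb{E}\, L_n(g_n, X, C) \;\ge\; \frac{c'\,k}{n}
\quad \text{for infinitely many } n.
```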
Cite
Text
Antos and Lugosi. "Strong Minimax Lower Bounds for Learning." Machine Learning, 1998. doi:10.1023/A:1007454427662Markdown
[Antos and Lugosi. "Strong Minimax Lower Bounds for Learning." Machine Learning, 1998.](https://mlanthology.org/mlj/1998/antos1998mlj-strong/) doi:10.1023/A:1007454427662BibTeX
@article{antos1998mlj-strong,
title = {{Strong Minimax Lower Bounds for Learning}},
author = {Antos, András and Lugosi, Gábor},
journal = {Machine Learning},
year = {1998},
pages = {31-56},
doi = {10.1023/A:1007454427662},
volume = {30},
url = {https://mlanthology.org/mlj/1998/antos1998mlj-strong/}
}