Competing with Wild Prediction Rules
Abstract
We consider the problem of on-line prediction competitive with a benchmark class of continuous but highly irregular prediction rules. It is known that if the benchmark class is a reproducing kernel Hilbert space, there exists a prediction algorithm whose average loss over the first N examples does not exceed the average loss of any prediction rule in the class plus a “regret term” of O(N^{-1/2}). The elements of some natural benchmark classes, however, are so irregular that these classes are not Hilbert spaces. In this paper we develop Banach-space methods to construct a prediction algorithm with a regret term of O(N^{-1/p}), where p ∈ [2, ∞) and p − 2 reflects the degree to which the benchmark class fails to be a Hilbert space. Only the square loss function is considered.
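The guarantee summarized above can be written out as an explicit inequality. The following is a sketch of the statement, assuming standard on-line prediction notation: μ_n denotes the algorithm's prediction on round n, y_n the observed outcome, and D any prediction rule in the benchmark class F.

```latex
% Regret bound under square loss (notation assumed, not taken from the abstract):
% the algorithm's average square loss over N rounds exceeds that of any
% benchmark rule D \in \mathcal{F} by at most O(N^{-1/p}).
\frac{1}{N}\sum_{n=1}^{N}\left(y_n - \mu_n\right)^2
\;\le\;
\frac{1}{N}\sum_{n=1}^{N}\bigl(y_n - D(x_n)\bigr)^2
\;+\;
O\!\left(N^{-1/p}\right),
\qquad p \in [2, \infty).
```

For p = 2 (the Hilbert-space case) this recovers the familiar O(N^{-1/2}) regret; as p grows, the benchmark class is allowed to be more irregular at the cost of a slower rate.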
Vovk. "Competing with Wild Prediction Rules." Machine Learning, 2007. doi:10.1007/s10994-007-5021-y
@article{vovk2007mlj-competing,
title = {{Competing with Wild Prediction Rules}},
author = {Vovk, Vladimir},
journal = {Machine Learning},
year = {2007},
pages = {193--212},
doi = {10.1007/s10994-007-5021-y},
volume = {69},
url = {https://mlanthology.org/mlj/2007/vovk2007mlj-competing/}
}