Competing with Wild Prediction Rules
Abstract
We consider the problem of on-line prediction competitive with a benchmark class of continuous but highly irregular prediction rules. It is known that if the benchmark class is a reproducing kernel Hilbert space, there exists a prediction algorithm whose average loss over the first N examples does not exceed the average loss of any prediction rule in the class plus a "regret term" of O(N^{-1/2}). The elements of some natural benchmark classes, however, are so irregular that these classes are not Hilbert spaces. In this paper we develop Banach-space methods to construct a prediction algorithm with a regret term of O(N^{-1/p}), where p ∈ (2, ∞) and p − 2 reflects the degree to which the benchmark class fails to be a Hilbert space.
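As a quick numerical illustration (not from the paper), the sketch below compares how fast the two regret rates shrink with the number of examples N: the Hilbert-space rate N^{-1/2} versus the Banach-space rate N^{-1/p} for a hypothetical p = 3.

```python
# Illustrative only: order-of-magnitude decay of the regret term,
# ignoring the constants hidden in the O(.) notation.

def regret_rate(N: int, p: float) -> float:
    """Order of the average-loss regret term after N examples: N^(-1/p)."""
    return N ** (-1.0 / p)

for N in (100, 10_000, 1_000_000):
    hilbert = regret_rate(N, 2)   # RKHS benchmark: O(N^{-1/2})
    banach = regret_rate(N, 3)    # hypothetical p = 3: O(N^{-1/3})
    print(f"N={N:>9}: N^(-1/2)={hilbert:.4f}  N^(-1/3)={banach:.4f}")
```

The larger p is (the further the benchmark class is from a Hilbert space), the slower the guaranteed regret term decays.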
Cite
Vladimir Vovk. "Competing with Wild Prediction Rules." Annual Conference on Computational Learning Theory, 2006, pp. 559–573. doi:10.1007/11776420_41
BibTeX
@inproceedings{vovk2006colt-competing,
title = {{Competing with Wild Prediction Rules}},
author = {Vovk, Vladimir},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2006},
pages = {559-573},
doi = {10.1007/11776420_41},
url = {https://mlanthology.org/colt/2006/vovk2006colt-competing/}
}