Online Regression Competitive with Changing Predictors
Abstract
This paper deals with the problem of making predictions in the online mode of learning where the dependence of the outcome y_t on the signal x_t can change with time. The Aggregating Algorithm (AA) is a technique that optimally merges experts from a pool, so that the resulting strategy suffers a cumulative loss that is almost as good as that of the best expert in the pool. We apply the AA to the case where the experts are all the linear predictors that can change with time. KAARCh is the kernel version of the resulting algorithm. In the kernel case, the experts are all the decision rules in some reproducing kernel Hilbert space that can change over time. We show that KAARCh suffers a cumulative square loss that is almost as good as that of any expert that does not change very rapidly.
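To make the online protocol concrete, the sketch below shows a simple kernel ridge regression learner that, at each trial t, predicts the outcome y_t from the signal x_t before seeing the true outcome. This is only an illustration of the online regression setting under square loss, not the paper's KAARCh algorithm; the Gaussian kernel and the regularization constant a are illustrative choices.

```python
import numpy as np

def rbf(u, v, sigma=1.0):
    # Gaussian RBF kernel; sigma is an illustrative bandwidth choice.
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))

def online_kernel_ridge(signals, outcomes, a=1.0, kernel=rbf):
    """On each trial t, predict outcomes[t] from signals[t] using
    kernel ridge regression fitted to the history (trials 0..t-1)."""
    preds = []
    for t in range(len(signals)):
        if t == 0:
            preds.append(0.0)  # no history yet: predict 0
            continue
        X, y = signals[:t], outcomes[:t]
        # Gram matrix of past signals and kernel values against x_t.
        K = np.array([[kernel(u, v) for v in X] for u in X])
        k = np.array([kernel(signals[t], v) for v in X])
        preds.append(k @ np.linalg.solve(K + a * np.eye(t), y))
    return np.array(preds)
```

The cumulative square loss of such a learner is compared against that of the best predictor in hindsight; the paper's contribution is a bound of this kind that holds even when the comparison class consists of predictors that change slowly over time.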
Cite

Text

Busuttil and Kalnishkan. "Online Regression Competitive with Changing Predictors." International Conference on Algorithmic Learning Theory, 2007. doi:10.1007/978-3-540-75225-7_17

Markdown

[Busuttil and Kalnishkan. "Online Regression Competitive with Changing Predictors." International Conference on Algorithmic Learning Theory, 2007.](https://mlanthology.org/alt/2007/busuttil2007alt-online/) doi:10.1007/978-3-540-75225-7_17

BibTeX
@inproceedings{busuttil2007alt-online,
title = {{Online Regression Competitive with Changing Predictors}},
author = {Busuttil, Steven and Kalnishkan, Yuri},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2007},
pages = {181-195},
doi = {10.1007/978-3-540-75225-7_17},
url = {https://mlanthology.org/alt/2007/busuttil2007alt-online/}
}