Online Learning with Predictable Sequences
Abstract
We present methods for online linear optimization that take advantage of benign (as opposed to worst-case) sequences. Specifically, if the sequence encountered by the learner is well described by a known "predictable process", the algorithms presented enjoy tighter bounds than the typical worst-case bounds. Additionally, the methods achieve the usual worst-case regret bounds if the sequence is not benign. Our approach can be seen as a way of adding prior knowledge about the sequence within the paradigm of online learning. The setting is shown to encompass partial and side information. Variance and path-length bounds [11, 9] can be seen as particular examples of online learning with simple predictable sequences. We further extend our methods and results to include competing with a set of possible predictable processes (models), that is, "learning" the predictable process itself concurrently with using it to obtain better regret guarantees. We show that such model selection is possible under various assumptions on the available feedback. Our results suggest a promising direction of further research with potential applications to stock market and time series prediction.
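The core idea can be illustrated with an optimistic variant of online gradient descent: before observing the loss at round t, the learner takes an extra step using a hint M_t drawn from the predictable process (here, the previous gradient, which yields path-length-style benefits when the loss sequence drifts slowly). The sketch below is illustrative only, not the paper's exact algorithm; the drifting linear loss sequence, the unit-ball decision set, and the function names are all assumptions made for the demo.

```python
import math

def project(w, radius=1.0):
    # Euclidean projection onto the ball of the given radius.
    n = math.sqrt(sum(x * x for x in w))
    if n <= radius:
        return w
    return [x * radius / n for x in w]

def run(T=500, eta=0.1, optimistic=True):
    # Linear losses f_t(w) = <g_t, w> with a slowly drifting gradient g_t
    # (an illustrative stand-in for a "benign" predictable sequence).
    d = 2
    w_hat = [0.0] * d      # secondary iterate (standard OGD iterate)
    prev_g = [0.0] * d     # hint M_t = g_{t-1}: the previous gradient
    G = [0.0] * d          # running sum of gradients, for the comparator
    total = 0.0
    for t in range(T):
        theta = 0.002 * t
        g = [math.cos(theta), math.sin(theta)]
        if optimistic:
            # Play an extra step in the direction suggested by the hint.
            w = project([wh - eta * m for wh, m in zip(w_hat, prev_g)])
        else:
            w = w_hat  # plain online gradient descent
        total += sum(gi * wi for gi, wi in zip(g, w))
        # Gradient update after the true loss is revealed.
        w_hat = project([wh - eta * gi for wh, gi in zip(w_hat, g)])
        prev_g = g
        G = [Gi + gi for Gi, gi in zip(G, g)]
    # Regret against the best fixed point in the ball: w* = -G / ||G||.
    best = -math.sqrt(sum(x * x for x in G))
    return total - best

print("optimistic regret:", run(optimistic=True))
print("plain regret:     ", run(optimistic=False))
```

Because the gradients drift slowly, the hint g_{t-1} is close to g_t and the optimistic learner incurs lower regret than plain gradient descent; on an adversarial sequence the hint is uninformative and the usual worst-case guarantee is recovered.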
Cite
Text
Rakhlin and Sridharan. "Online Learning with Predictable Sequences." Annual Conference on Computational Learning Theory, 2013.

Markdown
[Rakhlin and Sridharan. "Online Learning with Predictable Sequences." Annual Conference on Computational Learning Theory, 2013.](https://mlanthology.org/colt/2013/rakhlin2013colt-online/)

BibTeX
@inproceedings{rakhlin2013colt-online,
title = {{Online Learning with Predictable Sequences}},
author = {Rakhlin, Alexander and Sridharan, Karthik},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2013},
pages = {993-1019},
url = {https://mlanthology.org/colt/2013/rakhlin2013colt-online/}
}