Extracting Certainty from Uncertainty: Regret Bounded by Variation in Costs
Abstract
Prediction from expert advice is a fundamental problem in machine learning. A major pillar of the field is the existence of learning algorithms whose average loss approaches that of the best expert in hindsight (in other words, whose average regret approaches zero). Traditionally the regret of online algorithms was bounded in terms of the number of prediction rounds. Cesa-Bianchi, Mansour and Stoltz (Mach. Learn. 66(2–3):321–352, 2007) posed the question whether it is possible to bound the regret of an online algorithm by the variation of the observed costs. In this paper we resolve this question, and prove such bounds in the fully adversarial setting, in two important online learning scenarios: prediction from expert advice, and online linear optimization.
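To make the expert-advice setting concrete, here is a minimal sketch of the classical multiplicative-weights (Hedge) algorithm and its empirical regret against the best expert in hindsight. This is the standard baseline whose regret is bounded in terms of the number of rounds, not the variation-based algorithm of the paper; the cost values, learning rate, and expert count are illustrative assumptions.

```python
import math
import random

def hedge(costs, eta):
    """Run Hedge (multiplicative weights) over a sequence of cost vectors.

    costs: list of rounds, each a list of per-expert costs in [0, 1].
    eta: learning rate.
    Returns the algorithm's total expected cost.
    """
    n = len(costs[0])
    weights = [1.0] * n
    total = 0.0
    for round_costs in costs:
        z = sum(weights)
        probs = [w / z for w in weights]              # play an expert with these probabilities
        total += sum(p * c for p, c in zip(probs, round_costs))
        # exponentially down-weight experts that incurred high cost
        weights = [w * math.exp(-eta * c) for w, c in zip(weights, round_costs)]
    return total

# Toy example: two experts, low-variation costs (expert 0 is consistently better).
random.seed(0)
T = 1000
costs = [[0.3 + 0.01 * random.random(), 0.7] for _ in range(T)]
alg_cost = hedge(costs, eta=0.1)
best = min(sum(c[i] for c in costs) for i in range(2))
regret = alg_cost - best  # small relative to T: the weights concentrate quickly
```

On low-variation sequences like this one, Hedge's empirical regret is small, which is the phenomenon the paper's variation-based bounds capture in the worst case.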
Cite
Text
Hazan and Kale. "Extracting Certainty from Uncertainty: Regret Bounded by Variation in Costs." Annual Conference on Computational Learning Theory, 2008. doi:10.1007/s10994-010-5175-x
Markdown
[Hazan and Kale. "Extracting Certainty from Uncertainty: Regret Bounded by Variation in Costs." Annual Conference on Computational Learning Theory, 2008.](https://mlanthology.org/colt/2008/hazan2008colt-extracting/) doi:10.1007/s10994-010-5175-x
BibTeX
@inproceedings{hazan2008colt-extracting,
title = {{Extracting Certainty from Uncertainty: Regret Bounded by Variation in Costs}},
author = {Hazan, Elad and Kale, Satyen},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2008},
pages = {57-68},
doi = {10.1007/s10994-010-5175-x},
url = {https://mlanthology.org/colt/2008/hazan2008colt-extracting/}
}