A Second-Order Bound with Excess Losses
Abstract
We study the online aggregation of expert predictions, and first show new second-order regret bounds in the standard setting, obtained via a version of the Prod algorithm (and also a version of the polynomially weighted average algorithm) with multiple learning rates. These bounds are stated in terms of excess losses, the differences between the instantaneous losses suffered by the algorithm and those of a given expert. We then demonstrate the usefulness of these bounds in the context of experts that report their confidences as a number in the interval [0, 1], via a generic reduction to the standard setting. We conclude with two other applications in the standard setting, which improve the known bounds in the case of small excess losses and show that the regret remains bounded against i.i.d. sequences of losses.
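The multiple-learning-rate Prod variant mentioned in the abstract lends itself to a compact sketch. The code below is an illustrative reading of such an update under the assumption of losses in [0, 1], not a verbatim transcription of the paper's algorithm: each expert k keeps its own weight w_k and learning rate eta_k, the aggregator mixes experts proportionally to eta_k * w_k, and each eta_k is tuned from the running sum of that expert's squared excess losses, which is the mechanism behind second-order bounds of the form sum_t (lhat_t - l_{k,t}) <~ sqrt(ln K * sum_t (lhat_t - l_{k,t})^2), up to lower-order terms. All names here (ml_prod, eta, sq) are ours.

```python
import numpy as np

def ml_prod(loss_matrix):
    """Sketch of a Prod-style aggregator with one learning rate per expert.

    loss_matrix: (T, K) array of expert losses, assumed to lie in [0, 1].
    Returns the sequence of the aggregator's mixture losses.
    """
    T, K = loss_matrix.shape
    w = np.ones(K) / K           # Prod potentials, one per expert
    eta = np.full(K, 0.5)        # per-expert learning rates, capped at 1/2
    sq = np.zeros(K)             # running sums of squared excess losses
    agg_losses = []
    for t in range(T):
        # Mix experts proportionally to eta_k * w_k.
        p = eta * w
        p /= p.sum()
        losses = loss_matrix[t]
        agg = p @ losses                     # aggregator's instantaneous loss
        excess = agg - losses                # excess losses, one per expert
        sq += excess ** 2
        # Second-order tuning: eta_k shrinks with accumulated squared excess.
        eta_new = np.minimum(0.5, np.sqrt(np.log(K) / (1.0 + sq)))
        # Prod-style multiplicative update, then re-exponentiate to the new
        # learning rate (the multiple-learning-rate twist).
        w = (w * (1.0 + eta * excess)) ** (eta_new / eta)
        eta = eta_new
        agg_losses.append(agg)
    return np.array(agg_losses)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(ml_prod(rng.random((1000, 5))).mean())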
Cite
Text
Gaillard et al. "A Second-Order Bound with Excess Losses." Annual Conference on Computational Learning Theory, 2014.

Markdown
[Gaillard et al. "A Second-Order Bound with Excess Losses." Annual Conference on Computational Learning Theory, 2014.](https://mlanthology.org/colt/2014/gaillard2014colt-second/)

BibTeX
@inproceedings{gaillard2014colt-second,
  title     = {{A Second-Order Bound with Excess Losses}},
  author    = {Gaillard, Pierre and Stoltz, Gilles and van Erven, Tim},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2014},
  pages     = {176--196},
  url       = {https://mlanthology.org/colt/2014/gaillard2014colt-second/}
}