The Hedge Algorithm on a Continuum
Abstract
We consider an online optimization problem on a subset $S$ of $\mathbb{R}^n$ (not necessarily convex), in which a decision maker chooses, at each iteration $t$, a probability distribution $x^{(t)}$ over $S$, and seeks to minimize a cumulative expected loss, where each loss is a Lipschitz function revealed at the end of iteration $t$. Building on previous work, we propose a generalized Hedge algorithm and show an $O(\sqrt{t} \log t)$ bound on the regret when the losses are uniformly Lipschitz and $S$ is uniformly fat (a weaker condition than convexity). Finally, we propose a generalization of the dual averaging method to the set of Lebesgue-continuous distributions over $S$.
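To make the update concrete, the sketch below runs exponential weights (Hedge) over a finite discretization of $S$: at each round the distribution satisfies $x^{(t)}(s) \propto \exp(-\eta_t L^{(t-1)}(s))$, where $L^{(t-1)}$ is the cumulative loss. This is an illustration only, assuming a grid approximation of $S$; the paper works with continuous (Lebesgue-continuous) distributions over $S$ directly, and the helper name `hedge_on_grid`, the grid, and the learning-rate schedule are assumptions, not the paper's construction.

```python
import numpy as np

def hedge_on_grid(points, loss_fns, eta_schedule):
    """Exponential weights over a finite discretization of S (illustrative).

    points:       (m, n) array of candidate decisions sampled from S (assumed given)
    loss_fns:     list of T callables; loss_fns[t](s) returns the loss of point s in [0, 1]
    eta_schedule: callable t -> learning rate eta_t (assumed decreasing in t)
    """
    m = len(points)
    cum_loss = np.zeros(m)            # L^(t-1)(s): cumulative loss at each grid point
    expected_losses = []
    for t, loss in enumerate(loss_fns, start=1):
        eta = eta_schedule(t)
        # Hedge distribution: x^(t)(s) proportional to exp(-eta_t * L^(t-1)(s)).
        w = np.exp(-eta * (cum_loss - cum_loss.min()))  # shift for numerical stability
        x = w / w.sum()
        losses = np.array([loss(s) for s in points])
        expected_losses.append(x @ losses)              # expected loss under x^(t)
        cum_loss += losses
    return np.array(expected_losses)

if __name__ == "__main__":
    # Toy run on S = [0, 1] with the Lipschitz loss ell_t(s) = |s - 0.3| (illustrative).
    grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
    losses = [lambda s: abs(float(s[0]) - 0.3) for _ in range(200)]
    etas = lambda t: np.sqrt(np.log(t + 1) / (t + 1))   # decreasing schedule (assumption)
    print(hedge_on_grid(grid, losses, etas)[-1])        # expected loss at the final round
```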
Cite
Text
Krichene et al. "The Hedge Algorithm on a Continuum." International Conference on Machine Learning, 2015.
Markdown
[Krichene et al. "The Hedge Algorithm on a Continuum." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/krichene2015icml-hedge/)
BibTeX
@inproceedings{krichene2015icml-hedge,
title = {{The Hedge Algorithm on a Continuum}},
author = {Krichene, Walid and Balandat, Maximilian and Tomlin, Claire and Bayen, Alexandre},
booktitle = {International Conference on Machine Learning},
year = {2015},
pages = {824--832},
volume = {37},
url = {https://mlanthology.org/icml/2015/krichene2015icml-hedge/}
}