Gains and Losses Are Fundamentally Different in Regret Minimization: The Sparse Case
Abstract
We demonstrate that, in the classical non-stochastic regret minimization problem with $d$ decisions, gains and losses (to be respectively maximized or minimized) are fundamentally different. Indeed, under the additional sparsity assumption that at each stage at most $s$ decisions incur a nonzero outcome, we derive optimal regret bounds of different orders. Specifically, with gains, we obtain an optimal regret guarantee after $T$ stages of order $\sqrt{T\log s}$, so the classical dependence on the dimension is replaced by a dependence on the sparsity level. With losses, we provide matching upper and lower bounds of order $\sqrt{Ts\log(d)/d}$, which is decreasing in $d$. Finally, we also study the bandit setting and obtain an upper bound of order $\sqrt{Ts\log(d/s)}$ when outcomes are losses. This bound is proven to be optimal up to the logarithmic factor $\sqrt{\log(d/s)}$.
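To fix ideas, the following is a minimal, self-contained sketch of the setting, not the algorithm analyzed in the paper: it runs the standard exponentially weighted average forecaster on a randomly generated $s$-sparse gain sequence, with the classical $\sqrt{\log(d)/T}$ learning rate whose $\sqrt{T\log d}$ regret baseline the $\sqrt{T\log s}$ guarantee above improves upon. All parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)
d, s, T = 100, 5, 10_000            # decisions, sparsity level, horizon (illustrative)
eta = np.sqrt(np.log(d) / T)        # classical tuning: O(sqrt(T log d)) regret

weights = np.ones(d)                # unnormalized exponential weights
forecaster_gain = 0.0               # expected cumulative gain of the forecaster
cum_gains = np.zeros(d)             # cumulative gain of each fixed decision

for t in range(T):
    p = weights / weights.sum()     # play decision i with probability p[i]
    g = np.zeros(d)                 # s-sparse gain vector: at most s nonzero entries
    idx = rng.choice(d, size=s, replace=False)
    g[idx] = rng.random(s)
    forecaster_gain += p @ g        # expected gain of the randomized forecaster
    cum_gains += g
    weights *= np.exp(eta * g)      # multiplicative update on observed gains

regret = cum_gains.max() - forecaster_gain
print(f"regret: {regret:.1f}, sqrt(T log d) = {np.sqrt(T * np.log(d)):.1f}")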
Cite
Kwon, Joon and Perchet, Vianney. "Gains and Losses Are Fundamentally Different in Regret Minimization: The Sparse Case." Journal of Machine Learning Research 17 (2016): 1-32.
BibTeX
@article{kwon2016jmlr-gains,
  title   = {{Gains and Losses Are Fundamentally Different in Regret Minimization: The Sparse Case}},
  author  = {Kwon, Joon and Perchet, Vianney},
  journal = {Journal of Machine Learning Research},
  year    = {2016},
  pages   = {1-32},
  volume  = {17},
  url     = {https://mlanthology.org/jmlr/2016/kwon2016jmlr-gains/}
}