Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States
Abstract
We revisit the classical online portfolio selection problem. It is widely assumed that a trade-off between computational complexity and regret is unavoidable, with Cover's Universal Portfolios algorithm, SOFT-BAYES, and ADA-BARRONS currently constituting its state-of-the-art Pareto frontier. In this paper, we present the first efficient algorithm, BISONS, that obtains polylogarithmic regret with memory and per-step running time requirements that are polynomial in the dimension, displacing ADA-BARRONS from the Pareto frontier. Additionally, we resolve a COLT 2020 open problem by showing that a certain Follow-The-Regularized-Leader algorithm with log-barrier regularization suffers an exponentially larger dependence on the dimension than previously conjectured. Thus, we rule out this algorithm as a candidate for the Pareto frontier. We also extend our algorithm and analysis to a more general problem than online portfolio selection, viz. online learning of quantum states with log loss. This algorithm, called SCHRÖDINGER'S-BISONS, is the first efficient algorithm with polylogarithmic regret for this more general problem.
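For context, a standard formulation of the problem the abstract refers to (not spelled out in the abstract; the notation here is our own): at each round $t$ the learner picks a portfolio $w_t$ in the probability simplex over $d$ assets, the market reveals a price-relative vector $r_t$, the learner suffers log loss, and regret is measured against the best fixed portfolio in hindsight.

```latex
% Online portfolio selection with log loss (standard setup; notation ours).
% w_t \in \Delta_d        : learner's portfolio at round t (simplex over d assets)
% r_t \in \mathbb{R}_{>0}^d : price relatives revealed by the market at round t
\ell_t(w) = -\log \langle w, r_t \rangle,
\qquad
R_T = \sum_{t=1}^{T} \ell_t(w_t) \;-\; \min_{w \in \Delta_d} \sum_{t=1}^{T} \ell_t(w).
```

In the quantum-state generalization mentioned in the abstract, the portfolio $w$ is replaced by a density matrix $X \succeq 0$ with $\operatorname{Tr}(X) = 1$, and the loss takes the analogous form $\ell_t(X) = -\log \operatorname{Tr}(E_t X)$ for an observed measurement matrix $E_t$.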
Cite
Text
Zimmert et al. "Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States." Conference on Learning Theory, 2022.
Markdown
[Zimmert et al. "Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States." Conference on Learning Theory, 2022.](https://mlanthology.org/colt/2022/zimmert2022colt-pushing/)
BibTeX
@inproceedings{zimmert2022colt-pushing,
title = {{Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States}},
author = {Zimmert, Julian and Agarwal, Naman and Kale, Satyen},
booktitle = {Conference on Learning Theory},
year = {2022},
pages = {182--226},
volume = {178},
url = {https://mlanthology.org/colt/2022/zimmert2022colt-pushing/}
}