Data-Dependent Bounds for Online Portfolio Selection Without Lipschitzness and Smoothness
Abstract
This work introduces the first small-loss and gradual-variation regret bounds for online portfolio selection, marking the first data-dependent bounds for online convex optimization with non-Lipschitz, non-smooth losses. The proposed algorithms achieve sublinear regret in the worst case and logarithmic regret when the data is "easy," with per-round time almost linear in the number of investment alternatives. The regret bounds are derived via novel smoothness characterizations of the logarithmic loss, a local-norm-based analysis of follow-the-regularized-leader (FTRL) with self-concordant regularizers that are not necessarily barriers, and an implicit variant of optimistic FTRL with the log-barrier.
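To make the setting concrete, the following is a minimal sketch of online portfolio selection with FTRL and a log-barrier regularizer. It is illustrative only, not the paper's exact algorithm: the function name `ftrl_log_barrier`, the regularization weight `lam`, and the use of a generic solver (SciPy's SLSQP) for the per-round FTRL subproblem are all assumptions made for this sketch.

```python
import numpy as np
from scipy.optimize import minimize

def ftrl_log_barrier(returns, lam=1.0):
    """Illustrative FTRL with a log-barrier regularizer for online
    portfolio selection (a sketch, not the paper's algorithm).

    returns: (T, d) array of positive price relatives a_t.
    At each round, the learner plays a portfolio x on the simplex,
    suffers the logarithmic loss -log(<a_t, x>), and then solves the
    FTRL subproblem over the probability simplex.
    """
    T, d = returns.shape
    x = np.full(d, 1.0 / d)            # start at the uniform portfolio
    log_wealth = 0.0
    past = []                          # observed price-relative vectors
    for t in range(T):
        a = returns[t]
        log_wealth += np.log(a @ x)    # log-wealth earned by playing x
        past.append(a)

        # FTRL objective: cumulative log loss plus the log-barrier
        # R(z) = -sum_i log z_i, weighted by lam.
        def obj(z):
            return (-sum(np.log(p @ z) for p in past)
                    - lam * np.sum(np.log(z)))

        cons = ({'type': 'eq', 'fun': lambda z: z.sum() - 1.0},)
        res = minimize(obj, x, method='SLSQP',
                       bounds=[(1e-9, 1.0)] * d, constraints=cons)
        x = res.x
    return log_wealth, x
```

Note that solving each subproblem with a generic solver costs far more than the almost-linear per-round time claimed in the abstract; the sketch only shows the objective being minimized, not an efficient implementation.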
Cite
Text

Tsai et al. "Data-Dependent Bounds for Online Portfolio Selection Without Lipschitzness and Smoothness." Neural Information Processing Systems, 2023.

Markdown

[Tsai et al. "Data-Dependent Bounds for Online Portfolio Selection Without Lipschitzness and Smoothness." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/tsai2023neurips-datadependent/)

BibTeX
@inproceedings{tsai2023neurips-datadependent,
title = {{Data-Dependent Bounds for Online Portfolio Selection Without Lipschitzness and Smoothness}},
author = {Tsai, Chung-En and Lin, Ying-Ting and Li, Yen-Huan},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/tsai2023neurips-datadependent/}
}