Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization
Abstract
In this paper, we study adaptive online convex optimization, and aim to design a universal algorithm that achieves optimal regret bounds for multiple common types of loss functions. Existing universal methods are limited in the sense that they are optimal for only a subclass of loss functions. To address this limitation, we propose a novel online algorithm, named Maler, which enjoys the optimal $O(\sqrt{T})$, $O(d\log T)$, and $O(\log T)$ regret bounds for general convex, exponentially concave, and strongly convex functions, respectively. The essential idea is to run multiple types of learning algorithms with different learning rates in parallel, and to utilize a meta-algorithm to track the best one on the fly. Empirical results demonstrate the effectiveness of our method.
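The core idea described above, running several base learners with different learning rates and letting a meta-algorithm track the best one, can be illustrated with a toy sketch. The code below is not the paper's Maler algorithm; it is a simplified stand-in that combines online-gradient-descent experts via Hedge-style exponential weighting, on synthetic strongly convex losses $f_t(x) = \frac{1}{2}\|x - x^*\|^2$. The learning-rate scales, loss, and horizon are illustrative choices.

```python
import numpy as np

def meta_aggregate(T=200, d=2, eta_meta=1.0):
    """Toy sketch of the expert-aggregation idea: several OGD experts
    with different learning-rate scales run in parallel, and a
    Hedge-style meta-algorithm weights them by their own losses.
    (Illustrative only; not the Maler update rules from the paper.)"""
    scales = [0.1, 1.0, 10.0]            # per-expert learning-rate scales
    experts = [np.zeros(d) for _ in scales]
    log_weights = np.zeros(len(scales))  # meta weights, kept in log-space
    target = np.ones(d)                  # common minimizer of all f_t
    total_loss = 0.0
    for t in range(1, T + 1):
        # Meta prediction: exponentially weighted average of the experts.
        w = np.exp(log_weights - log_weights.max())
        w /= w.sum()
        x = sum(wi * xi for wi, xi in zip(w, experts))
        # Strongly convex loss f_t(x) = 0.5 * ||x - target||^2.
        total_loss += 0.5 * np.dot(x - target, x - target)
        # Hedge update: penalize each expert by its own loss.
        expert_losses = np.array(
            [0.5 * np.dot(xi - target, xi - target) for xi in experts])
        log_weights = log_weights - eta_meta * expert_losses
        # Each expert takes an OGD step at its own iterate and rate.
        for i, c in enumerate(scales):
            grad = experts[i] - target
            experts[i] = experts[i] - (c / np.sqrt(t)) * grad
    return x, total_loss
```

Here the meta-algorithm quickly concentrates its weight on the expert whose learning-rate scale suits the losses (the divergent large-rate expert is down-weighted essentially to zero), which is the mechanism that lets a single universal method match the best rate for each function class.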
Cite
Text
Wang et al. "Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization." Uncertainty in Artificial Intelligence, 2019.
Markdown
[Wang et al. "Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization." Uncertainty in Artificial Intelligence, 2019.](https://mlanthology.org/uai/2019/wang2019uai-adaptivity/)
BibTeX
@inproceedings{wang2019uai-adaptivity,
title = {{Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization}},
author = {Wang, Guanghui and Lu, Shiyin and Zhang, Lijun},
booktitle = {Uncertainty in Artificial Intelligence},
year = {2019},
pages = {659--668},
volume = {115},
url = {https://mlanthology.org/uai/2019/wang2019uai-adaptivity/}
}