Conservative Online Convex Optimization
Abstract
Online learning algorithms often perform poorly during the initial stages of the optimization procedure, which in practical applications might dissuade potential users from deploying such solutions. In this paper, we study a novel setting, namely conservative online convex optimization, in which we optimize a sequence of convex loss functions under the constraint that we must perform at least as well as a known default strategy throughout the entire learning process, a.k.a. the conservativeness constraint. To address this problem, we design a meta-algorithm, namely Conservative Projection (CP), that converts any no-regret algorithm for online convex optimization into one that simultaneously satisfies the conservativeness constraint and maintains the same regret order. Finally, we run an extensive experimental campaign, comparing and analyzing the performance of our meta-algorithm against state-of-the-art algorithms.
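To make the conservativeness constraint concrete, here is a minimal toy sketch in Python: an online gradient descent (OGD) learner on quadratic losses that falls back to the known default action whenever playing its own iterate would risk exceeding a (1 + alpha) budget over the default strategy's cumulative loss. This is an illustration of the constraint only, not the paper's Conservative Projection meta-algorithm; the budget test, the quadratic losses, and the `alpha`/`eta` parameters are all assumptions for the example, and the toy cheats by evaluating the current loss before committing to an action, which a true online algorithm cannot do (real conservative methods rely on pessimistic bounds instead).

```python
import numpy as np

# Toy illustration of a conservativeness constraint: keep the learner's
# cumulative loss within a (1 + alpha) factor of a default strategy's
# cumulative loss at every round. NOT the paper's CP meta-algorithm.
rng = np.random.default_rng(0)
d, T = 2, 200
x_default = np.zeros(d)   # known default action
alpha = 0.05              # allowed budget over the default strategy
eta = 0.1                 # OGD step size

x = x_default.copy()
cum_learner, cum_default = 0.0, 0.0
for t in range(T):
    target = rng.normal(size=d)              # loss f_t(z) = 0.5 * ||z - target||^2
    f = lambda z: 0.5 * np.sum((z - target) ** 2)
    # Conservative check (uses f_t before committing -- a simplification):
    # play the OGD iterate only if the budget still holds afterwards.
    if cum_learner + f(x) <= (1 + alpha) * (cum_default + f(x_default)):
        play = x
    else:
        play = x_default
    cum_learner += f(play)
    cum_default += f(x_default)
    x = x - eta * (x - target)               # OGD update on the underlying iterate

# By induction, the check keeps the constraint satisfied at every round.
print(cum_learner <= (1 + alpha) * cum_default)
```

Because playing the default adds the same loss to both cumulative sums, the constraint is preserved by induction whenever the learner falls back, so the final check holds on every run.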
Cite
Text
de Luca et al. "Conservative Online Convex Optimization." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2021. doi:10.1007/978-3-030-86486-6_2
Markdown
[de Luca et al. "Conservative Online Convex Optimization." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2021.](https://mlanthology.org/ecmlpkdd/2021/deluca2021ecmlpkdd-conservative/) doi:10.1007/978-3-030-86486-6_2
BibTeX
@inproceedings{deluca2021ecmlpkdd-conservative,
title = {{Conservative Online Convex Optimization}},
author = {de Luca, Martino Bernasconi and Vittori, Edoardo and Trovò, Francesco and Restelli, Marcello},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2021},
pages = {19--34},
doi = {10.1007/978-3-030-86486-6_2},
url = {https://mlanthology.org/ecmlpkdd/2021/deluca2021ecmlpkdd-conservative/}
}