Online Optimization with Memory and Competitive Control
Abstract
This paper presents competitive algorithms for a novel class of online optimization problems with memory. We consider a setting where the learner seeks to minimize the sum of a hitting cost and a switching cost that depends on the previous $p$ decisions. This setting generalizes Smoothed Online Convex Optimization. The proposed approach, Optimistic Regularized Online Balanced Descent, achieves a constant, dimension-free competitive ratio. Further, we show a connection between online optimization with memory and online control with adversarial disturbances. This connection, in turn, leads to a new constant-competitive policy for a rich class of online control problems.
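As a rough sketch of the setting described in the abstract (the notation below is illustrative and not taken from the paper itself): at each round $t$ the learner chooses a point $x_t$ and incurs a per-round cost of the form
$$f_t(x_t) + c(x_t, x_{t-1}, \dots, x_{t-p}),$$
where $f_t$ is the hitting cost and $c$ is a switching cost that depends on the $p$ most recent decisions. Smoothed Online Convex Optimization corresponds to memory length $p = 1$, e.g. with a switching cost such as $c(x_t, x_{t-1}) = \tfrac{1}{2}\|x_t - x_{t-1}\|^2$.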
Cite
Text
Shi et al. "Online Optimization with Memory and Competitive Control." Neural Information Processing Systems, 2020.
Markdown
[Shi et al. "Online Optimization with Memory and Competitive Control." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/shi2020neurips-online/)
BibTeX
@inproceedings{shi2020neurips-online,
  title     = {{Online Optimization with Memory and Competitive Control}},
  author    = {Shi, Guanya and Lin, Yiheng and Chung, Soon-Jo and Yue, Yisong and Wierman, Adam},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/shi2020neurips-online/}
}