An Online Algorithm for Smoothed Regression and LQR Control
Abstract
We consider Online Convex Optimization (OCO) in the setting where the costs are $m$-strongly convex and the online learner pays a switching cost for changing decisions between rounds. We show that the recently proposed Online Balanced Descent (OBD) algorithm is constant competitive in this setting, with competitive ratio $3 + O(1/m)$, irrespective of the ambient dimension. Additionally, we show that when the sequence of cost functions is $\epsilon$-smooth, OBD has near-optimal dynamic regret and maintains strong per-round accuracy. We demonstrate the generality of our approach by showing that the OBD framework can be used to construct competitive algorithms for a variety of online problems across learning and control, including online variants of ridge regression, logistic regression, maximum likelihood estimation, and LQR control.
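To make the balance idea behind OBD concrete, below is a minimal sketch of one OBD step in Python. It assumes quadratic hitting costs $f_t(x) = \frac{m}{2}\|x - y_t\|^2$, for which the projection onto a sublevel set has a closed form; in general, OBD projects the previous point onto a sublevel set of an arbitrary $m$-strongly convex $f_t$, with the level chosen so the switching cost balances a multiple of the hitting cost. The function name `obd_step`, the balance parameter `beta`, and the toy data are illustrative choices, not the paper's notation or tuned constants.

```python
import numpy as np

def obd_step(x_prev, y_t, m, beta):
    """One OBD step for the quadratic hitting cost
    f_t(x) = (m/2) * ||x - y_t||^2 (an illustrative special case).

    OBD moves x_prev to the projection onto the sublevel set
    {x : f_t(x) <= l}, with l chosen so that the switching cost
    (1/2)||x_t - x_prev||^2 equals beta times the hitting cost
    f_t(x_t). For this quadratic cost, the projection lies on the
    segment from x_prev to the minimizer y_t, and the balance
    condition yields the closed form below.
    """
    d = np.linalg.norm(x_prev - y_t)
    if d == 0.0:
        return y_t.copy()
    # Balance condition: (1/2)(d - r)^2 = beta * (m/2) * r^2,
    # where r = ||x_t - y_t||, giving r = d / (1 + sqrt(beta * m)).
    r = d / (1.0 + np.sqrt(beta * m))
    return y_t + (r / d) * (x_prev - y_t)

# Illustrative run: the learner tracks a drifting target.
rng = np.random.default_rng(0)
m, beta = 1.0, 2.0  # strong-convexity and balance parameters (illustrative)
x = np.zeros(2)
for t in range(5):
    y_t = rng.normal(size=2)  # minimizer of the round-t hitting cost
    x = obd_step(x, y_t, m, beta)
    hit = 0.5 * m * np.linalg.norm(x - y_t) ** 2
    move = "moved toward target, stopped short to limit switching cost"
    print(f"round {t}: hitting cost {hit:.3f} ({move})")
```

Note the design choice this sketch illustrates: rather than jumping to the minimizer of each round's cost (which would incur large switching costs) or standing still (which would incur large hitting costs), OBD stops at the point where the two costs are held in a fixed ratio.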
Cite

Text
Goel and Wierman. "An Online Algorithm for Smoothed Regression and LQR Control." Artificial Intelligence and Statistics, 2019.

Markdown
[Goel and Wierman. "An Online Algorithm for Smoothed Regression and LQR Control." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/goel2019aistats-online/)

BibTeX
@inproceedings{goel2019aistats-online,
  title     = {{An Online Algorithm for Smoothed Regression and LQR Control}},
  author    = {Goel, Gautam and Wierman, Adam},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2019},
  pages     = {2504--2513},
  volume    = {89},
  url       = {https://mlanthology.org/aistats/2019/goel2019aistats-online/}
}