Online Estimation and Control with Optimal Pathlength Regret
Abstract
A natural goal when designing online learning algorithms for non-stationary environments is to bound the regret of the algorithm in terms of the temporal variation of the input sequence. Intuitively, when the variation is small, it should be easier for the algorithm to achieve low regret, since past observations are predictive of future inputs. Such data-dependent "pathlength" regret bounds have recently been obtained for a wide variety of online learning problems, including online convex optimization (OCO) and bandits. We obtain the first pathlength regret bounds for online control and estimation (e.g. Kalman filtering) in linear dynamical systems. The key idea in our derivation is to reduce pathlength-optimal filtering and control to certain variational problems in robust estimation and control; these reductions may be of independent interest. Numerical simulations confirm that our pathlength-optimal algorithms outperform traditional H-2 and H-infinity algorithms when the environment varies over time.
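To make the notion of temporal variation concrete, the following is a minimal sketch (Python, not taken from the paper) that measures the pathlength of a disturbance sequence as the sum of squared norms of consecutive differences; the paper's precise definition of pathlength may differ, and the function and variable names here are illustrative assumptions only.

import numpy as np

def pathlength(w):
    # Illustrative pathlength measure: sum of squared norms of consecutive
    # differences of the sequence w[0], ..., w[T-1]. Small pathlength means
    # the sequence varies slowly over time.
    w = np.asarray(w)
    diffs = np.diff(w, axis=0)
    return float(np.sum(np.linalg.norm(diffs, axis=1) ** 2))

# A slowly varying sequence has small pathlength, so a pathlength regret
# bound guarantees low regret on it; a rapidly varying one does not.
rng = np.random.default_rng(0)
T, n = 200, 2
slow = np.cumsum(0.01 * rng.standard_normal((T, n)), axis=0)  # small step-to-step change
fast = rng.standard_normal((T, n))                            # large step-to-step change
print("pathlength(slow) =", pathlength(slow))
print("pathlength(fast) =", pathlength(fast))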
Cite

Text
Goel and Hassibi. "Online Estimation and Control with Optimal Pathlength Regret." Proceedings of The 4th Annual Learning for Dynamics and Control Conference, 2022.

Markdown
[Goel and Hassibi. "Online Estimation and Control with Optimal Pathlength Regret." Proceedings of The 4th Annual Learning for Dynamics and Control Conference, 2022.](https://mlanthology.org/l4dc/2022/goel2022l4dc-online/)

BibTeX
@inproceedings{goel2022l4dc-online,
title = {{Online Estimation and Control with Optimal Pathlength Regret}},
author = {Goel, Gautam and Hassibi, Babak},
booktitle = {Proceedings of The 4th Annual Learning for Dynamics and Control Conference},
year = {2022},
pages = {404--414},
volume = {168},
url = {https://mlanthology.org/l4dc/2022/goel2022l4dc-online/}
}