Robust Guarantees for Learning an Autoregressive Filter

Abstract

The optimal predictor for a known linear dynamical system (with hidden state and Gaussian noise) takes the form of an autoregressive linear filter, namely the Kalman filter. However, making optimal predictions in an unknown linear dynamical system is a more challenging problem that is fundamental to control theory and reinforcement learning. To this end, we take the approach of directly learning an autoregressive filter for time-series prediction under unknown dynamics. Our analysis differs from previous statistical analyses in that we regress not only on the inputs to the dynamical system, but also the outputs, which is essential to dealing with process noise. The main challenge is to estimate the filter under worst case input (in $\mathcal H_\infty$ norm), for which we use an $L^\infty$-based objective rather than ordinary least-squares. For learning an autoregressive model, our algorithm has optimal sample complexity in terms of the rollout length, which does not seem to be attained by naive least-squares.
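The key modeling choice in the abstract, regressing on past outputs as well as past inputs, can be illustrated with a small simulation. The sketch below is not the paper's algorithm: it simulates a toy linear dynamical system with hidden state and Gaussian noise, builds autoregressive features from the last k outputs and inputs, and fits the filter with ordinary least-squares (the baseline the paper contrasts with its L∞-based objective). All system matrices and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear dynamical system (matrices chosen for illustration only):
# x_{t+1} = A x_t + B u_t + process noise,   y_t = C x_t + observation noise.
n, d = 2, 1
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])

T = 500
u = rng.normal(size=(T, d))
x = np.zeros(n)
y = np.zeros(T)
for t in range(T):
    y[t] = (C @ x)[0] + 0.1 * rng.normal()          # observation noise
    x = A @ x + B @ u[t] + 0.1 * rng.normal(size=n)  # process noise

# Autoregressive filter of length k: predict y_t from the k most recent
# outputs AND inputs; regressing on past outputs is what lets the filter
# cope with process noise, as the abstract notes.
k = 5
rows, targets = [], []
for t in range(k, T):
    rows.append(np.concatenate([y[t - k:t], u[t - k:t].ravel()]))
    targets.append(y[t])
Phi, Y = np.array(rows), np.array(targets)

# Baseline fit: ordinary least-squares. The paper instead minimizes an
# L^inf-based objective to get guarantees under worst-case (H_inf) input.
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
mse = np.mean((Phi @ theta - Y) ** 2)
print(theta.shape, mse)
```

The least-squares fit here minimizes average-case error; the paper's point is that controlling worst-case input requires replacing this objective with an L∞-based one.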

Cite

Text

Lee and Zhang. "Robust Guarantees for Learning an Autoregressive Filter." Proceedings of the 31st International Conference on Algorithmic Learning Theory, 2020.

Markdown

[Lee and Zhang. "Robust Guarantees for Learning an Autoregressive Filter." Proceedings of the 31st International Conference on Algorithmic Learning Theory, 2020.](https://mlanthology.org/alt/2020/lee2020alt-robust/)

BibTeX

@inproceedings{lee2020alt-robust,
  title     = {{Robust Guarantees for Learning an Autoregressive Filter}},
  author    = {Lee, Holden and Zhang, Cyril},
  booktitle = {Proceedings of the 31st International Conference on Algorithmic Learning Theory},
  year      = {2020},
  pages     = {490--517},
  volume    = {117},
  url       = {https://mlanthology.org/alt/2020/lee2020alt-robust/}
}