Learning Linear Models Using Distributed Iterative Hessian Sketching
Abstract
This work considers the problem of learning the Markov parameters of a linear system from observed data. Recent non-asymptotic system identification results have characterized the sample complexity of this problem in the single and multi-rollout setting. In both instances, the number of samples required in order to obtain acceptable estimates can produce optimization problems with an intractably large number of decision variables for a second-order algorithm. We show that a randomized and distributed Newton algorithm based on Hessian-sketching can produce $\epsilon$-optimal solutions and converges geometrically. Moreover, the algorithm is trivially parallelizable. Our results hold for a variety of sketching matrices and we illustrate the theory with numerical examples.
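The core idea the abstract refers to, Newton steps computed with a randomly sketched Hessian, can be illustrated on an ordinary least-squares problem. The sketch below is a minimal, single-machine illustration of iterative Hessian sketching with Gaussian sketches, not the distributed algorithm from the paper; the function name, sketch dimension, and problem sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ihs_least_squares(A, b, sketch_dim, iters=15):
    """Iterative Hessian sketching for min_x ||Ax - b||^2.

    Each iteration draws a fresh Gaussian sketch S and takes a Newton
    step using the sketched Hessian (SA)^T (SA) in place of A^T A,
    while the gradient is computed exactly.
    """
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        # Scaled Gaussian sketch: E[S^T S] = I, so (SA)^T(SA) ~ A^T A
        S = rng.standard_normal((sketch_dim, n)) / np.sqrt(sketch_dim)
        SA = S @ A                      # sketch_dim x d, much smaller than A
        grad = A.T @ (A @ x - b)        # exact gradient of the full objective
        # Newton step with the d x d sketched Hessian
        x = x - np.linalg.solve(SA.T @ SA, grad)
    return x

# Compare against the exact least-squares solution.
n, d = 2000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
x_ihs = ihs_least_squares(A, b, sketch_dim=200)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_ihs - x_exact))
```

Because each sketched Hessian solve involves only a `sketch_dim × d` matrix rather than the full `n × d` data, the per-iteration cost drops substantially, and the geometric convergence means few iterations are needed; in the distributed setting of the paper, independent sketches can be computed in parallel.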
Cite
Text
Wang, Han, and James Anderson. "Learning Linear Models Using Distributed Iterative Hessian Sketching." Proceedings of The 4th Annual Learning for Dynamics and Control Conference, 2022.
BibTeX
@inproceedings{wang2022l4dc-learning,
title = {{Learning Linear Models Using Distributed Iterative Hessian Sketching}},
author = {Wang, Han and Anderson, James},
booktitle = {Proceedings of The 4th Annual Learning for Dynamics and Control Conference},
year = {2022},
pages = {427--440},
volume = {168},
url = {https://mlanthology.org/l4dc/2022/wang2022l4dc-learning/}
}