Linear Convergence of Gradient Methods for Estimating Structured Transition Matrices in High-Dimensional Vector Autoregressive Models
Abstract
In this paper, we present non-asymptotic optimization guarantees for gradient descent methods that estimate structured transition matrices in high-dimensional vector autoregressive (VAR) models. We adopt projected gradient descent (PGD) for single-structured transition matrices and alternating projected gradient descent (AltPGD) for superposition-structured ones. Our analysis demonstrates that both gradient algorithms converge linearly to the statistical error even though strong convexity of the objective function is absent in the high-dimensional setting. Moreover, our result is sharp (up to a constant factor) in the sense that it matches the phase transition theory of the corresponding model with independent samples. To the best of our knowledge, this analysis constitutes the first non-asymptotic optimization guarantee of a linear rate for regularized estimation in high-dimensional VAR models. Numerical results are provided to support our theoretical analysis.
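To illustrate the flavor of the single-structure case, below is a minimal sketch (not the authors' algorithm or code) of projected gradient descent for a VAR(1) model with a sparse transition matrix: gradient steps on a least-squares loss followed by hard-thresholding projection. The function names (project_sparse, pgd_var), the sparsity level s, the 1/L step-size choice, and the simulated data are illustrative assumptions.

import numpy as np

def project_sparse(A, s):
    # Keep the s largest-magnitude entries of A and zero out the rest
    # (ties at the threshold may keep a few extra entries).
    flat = np.abs(A).ravel()
    if s >= flat.size:
        return A
    threshold = np.partition(flat, -s)[-s]
    return np.where(np.abs(A) >= threshold, A, 0.0)

def pgd_var(X, s, eta=None, n_iter=300):
    # Estimate a sparse VAR(1) transition matrix from a (T x d) sample path X by
    # minimizing (1 / (2(T-1))) * sum_t ||x_{t+1} - A x_t||^2 with projected gradient steps.
    T, d = X.shape
    X_past, X_next = X[:-1], X[1:]
    if eta is None:
        # Step size 1/L, with L the largest eigenvalue of the sample covariance of the regressors.
        eta = 1.0 / np.linalg.eigvalsh(X_past.T @ X_past / (T - 1)).max()
    A = np.zeros((d, d))
    for _ in range(n_iter):
        residual = X_next - X_past @ A.T        # (T-1) x d one-step prediction errors
        grad = -residual.T @ X_past / (T - 1)   # gradient of the quadratic loss
        A = project_sparse(A - eta * grad, s)   # projection onto s-sparse matrices
    return A

# Usage: simulate a stable sparse VAR(1) process and recover its transition matrix.
rng = np.random.default_rng(0)
d, T, s = 20, 2000, 40
A_true = np.zeros((d, d))
A_true.ravel()[rng.choice(d * d, size=s, replace=False)] = rng.uniform(-0.2, 0.2, size=s)
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + rng.standard_normal(d)
print("Frobenius error:", np.linalg.norm(pgd_var(X, s) - A_true, "fro"))

For the superposition-structured case, AltPGD would alternate analogous gradient-plus-projection updates over the individual components (e.g., a sparse part and a low-rank part); that variant is omitted from this sketch.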
Cite
Text
Lv et al. "Linear Convergence of Gradient Methods for Estimating Structured Transition Matrices in High-Dimensional Vector Autoregressive Models." Neural Information Processing Systems, 2021.
BibTeX
@inproceedings{lv2021neurips-linear,
title = {{Linear Convergence of Gradient Methods for Estimating Structured Transition Matrices in High-Dimensional Vector Autoregressive Models}},
author = {Lv, Xiao and Cui, Wei and Liu, Yulong},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/lv2021neurips-linear/}
}