EM Converges for a Mixture of Many Linear Regressions
Abstract
We study the convergence of the Expectation-Maximization (EM) algorithm for mixtures of linear regressions with an arbitrary number $k$ of components. We show that as long as the signal-to-noise ratio (SNR) is $\tilde{\Omega}(k)$, well-initialized EM converges to the true regression parameters. Previous results for $k \geq 3$ established only local convergence in the noiseless setting, i.e., where the SNR is infinite. Our results extend the analysis to the noisy setting and, notably, establish a statistical error rate that is independent of the norm (or pairwise distance) of the regression parameters. In particular, our results imply exact recovery as $\sigma \rightarrow 0$, in contrast to most previous local convergence results for EM, where the statistical error scales with the norm of the parameters. Standard moment-method approaches may be applied to guarantee initialization in the region where our local convergence guarantees apply.
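To make the setting concrete, below is a minimal sketch of the EM updates for a mixture of $k$ linear regressions with equal mixing weights and known noise level $\sigma$: the E-step computes posterior responsibilities from the squared residuals, and the M-step solves a weighted least-squares problem per component. This is an illustrative implementation under those assumptions, not the paper's exact procedure; the function name `em_mixed_linear_regression` and its interface are ours.

```python
import numpy as np

def em_mixed_linear_regression(X, y, betas_init, sigma, n_iters=50):
    """EM sketch for a mixture of k linear regressions with equal
    mixing weights and known noise level sigma.

    X: (n, d) covariates; y: (n,) responses;
    betas_init: (k, d) initial regression parameters, e.g. from a
    moment-method initialization as the abstract suggests."""
    betas = betas_init.copy()
    n, d = X.shape
    k = betas.shape[0]
    for _ in range(n_iters):
        # E-step: responsibility of component j for sample i is
        # proportional to exp(-(y_i - <x_i, beta_j>)^2 / (2 sigma^2));
        # equal mixing weights cancel in the normalization.
        resid = y[:, None] - X @ betas.T              # (n, k)
        logw = -0.5 * (resid / sigma) ** 2
        logw -= logw.max(axis=1, keepdims=True)       # log-space trick for stability
        w = np.exp(logw)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted least squares for each component.
        for j in range(k):
            Xw = X * w[:, j : j + 1]
            A = Xw.T @ X + 1e-8 * np.eye(d)           # tiny ridge guards singularity
            betas[j] = np.linalg.solve(A, Xw.T @ y)
    return betas

# Synthetic check: data from k = 3 regressions, initialization near the
# truth to mimic the well-initialized regime the result requires.
rng = np.random.default_rng(0)
n, d, k, sigma = 2000, 5, 3, 0.1
X = rng.standard_normal((n, d))
true_betas = 3.0 * rng.standard_normal((k, d))
z = rng.integers(k, size=n)
y = np.einsum("nd,nd->n", X, true_betas[z]) + sigma * rng.standard_normal(n)
init = true_betas + 0.5 * rng.standard_normal((k, d))
est = em_mixed_linear_regression(X, y, init, sigma)
```

Consistent with the abstract's error guarantee, the recovery error of such a scheme shrinks with $\sigma$ rather than with the norm of the $\beta_j$, and as $\sigma \rightarrow 0$ the responsibilities become hard assignments and the iteration recovers the parameters exactly.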
Cite
Text
Kwon and Caramanis. "EM Converges for a Mixture of Many Linear Regressions." Artificial Intelligence and Statistics, 2020.
Markdown
[Kwon and Caramanis. "EM Converges for a Mixture of Many Linear Regressions." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/kwon2020aistats-em/)
BibTeX
@inproceedings{kwon2020aistats-em,
title = {{EM Converges for a Mixture of Many Linear Regressions}},
author = {Kwon, Jeongyeol and Caramanis, Constantine},
booktitle = {Artificial Intelligence and Statistics},
year = {2020},
pages = {1727--1736},
volume = {108},
url = {https://mlanthology.org/aistats/2020/kwon2020aistats-em/}
}