EM’s Convergence in Gaussian Latent Tree Models
Abstract
We study the optimization landscape of the log-likelihood function and the convergence of the Expectation-Maximization (EM) algorithm in latent Gaussian tree models, i.e., tree-structured Gaussian graphical models whose leaf nodes are observable and whose non-leaf nodes are unobservable. We show that the unique non-trivial stationary point of the population log-likelihood is its global maximum, and establish that the EM algorithm is guaranteed to converge to it in the single-latent-variable case. Our results for the landscape of the log-likelihood function in general latent tree models provide support for the extensive practical use of maximum-likelihood-based methods in this setting. Our results for the EM algorithm extend an emerging line of work on obtaining global convergence guarantees for this celebrated algorithm. We establish our results on the non-trivial stationary points of the log-likelihood by arguing that a certain system of polynomial equations obtained from the EM updates has a unique non-trivial solution. The global convergence of the EM algorithm then follows by arguing that all trivial fixed points are higher-order saddle points.
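The single-latent-variable setting the abstract refers to can be illustrated with a minimal EM loop for a model where one hidden Gaussian variable explains the observed leaves. The sketch below assumes the standard one-factor parameterization x = w z + eps with z ~ N(0, 1) and diagonal noise; the function name `em_one_factor`, the diagonal-noise assumption, and the synthetic loadings `w_true` are illustrative choices, not the paper's exact model or notation.

```python
import numpy as np

def em_one_factor(X, n_iters=200, seed=0):
    # Illustrative EM for a single-latent-variable Gaussian model:
    # x = w z + eps, z ~ N(0, 1), eps ~ N(0, diag(psi)).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(size=d)    # leaf loadings (edge weights to the latent)
    psi = np.ones(d)          # per-leaf noise variances
    S = X.T @ X / n           # sample covariance of the leaves
    lls = []
    for _ in range(n_iters):
        # Model covariance of the leaves: C = w w^T + diag(psi)
        C = np.outer(w, w) + np.diag(psi)
        Cinv = np.linalg.inv(C)
        _, logdet = np.linalg.slogdet(C)
        lls.append(-0.5 * n * (d * np.log(2 * np.pi) + logdet
                               + np.trace(Cinv @ S)))
        # E-step: posterior moments of z given each observed row
        beta = Cinv @ w                 # E[z | x] = beta . x
        ez = X @ beta                   # posterior means, shape (n,)
        ez2 = 1.0 - w @ beta + ez**2    # posterior second moments
        # M-step: closed-form maximizers of the expected complete log-likelihood
        m = X.T @ ez / n
        w = (n * m) / ez2.sum()
        psi = np.maximum(np.diag(S) - w * m, 1e-8)
    return w, psi, lls

# Synthetic leaves generated from a known loading vector
rng = np.random.default_rng(1)
w_true = np.array([1.0, -0.5, 0.8])
Z = rng.normal(size=500)
X = np.outer(Z, w_true) + rng.normal(scale=0.3, size=(500, 3))
w_hat, psi_hat, lls = em_one_factor(X)
```

Each iteration records the log-likelihood before updating, so `lls` is non-decreasing (EM's ascent property); the recovered loadings match `w_true` only up to a global sign flip, since `w` and `-w` induce the same leaf covariance.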
Cite

Text

Dagan et al. "EM’s Convergence in Gaussian Latent Tree Models." Conference on Learning Theory, 2022.

Markdown

[Dagan et al. "EM’s Convergence in Gaussian Latent Tree Models." Conference on Learning Theory, 2022.](https://mlanthology.org/colt/2022/dagan2022colt-ems/)

BibTeX
@inproceedings{dagan2022colt-ems,
title = {{EM’s Convergence in Gaussian Latent Tree Models}},
author = {Dagan, Yuval and Kandiros, Vardis and Daskalakis, Constantinos},
booktitle = {Conference on Learning Theory},
year = {2022},
pages = {2597--2667},
volume = {178},
url = {https://mlanthology.org/colt/2022/dagan2022colt-ems/}
}