Convex Relaxations of Latent Variable Training

Abstract

We investigate a new convex relaxation of an expectation-maximization (EM) variant that approximates a standard objective while eliminating local minima. First, a cautionary result is presented, showing that any convex relaxation of EM over hidden variables must give trivial results if any dependence on the missing values is retained. Although this appears to be a strong negative outcome, we then demonstrate how the problem can be bypassed by using equivalence relations instead of value assignments over hidden variables. In particular, we develop new algorithms for estimating exponential conditional models that require only equivalence-relation information over the variable values. This reformulation leads to an exact expression for EM variants in a wide range of problems. We then develop a semidefinite relaxation that yields global training by eliminating local minima.
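
To give a concrete, deliberately simplified picture of the equivalence-relation idea, the sketch below poses a generic training surrogate as a semidefinite program. A discrete assignment of n examples to k hidden values, encoded as a 0/1 matrix Y with one nonzero per row, enters only through the equivalence-relation matrix M = YY^T, whose (i, j) entry indicates whether examples i and j share a hidden value; relaxing the combinatorial constraints on M to convex ones yields a problem with no local minima. The cost matrix C, the balanced-class constraint, and the use of cvxpy are illustrative assumptions, not the paper's exact formulation.

import numpy as np
import cvxpy as cp

# Illustrative sketch only: a generic semidefinite relaxation over an
# equivalence-relation matrix M = Y Y^T, in the spirit of the paper's
# reformulation. C, the balance constraint, and cvxpy itself are
# assumptions made for this example, not the authors' objective.

n, k = 20, 3                        # number of examples, hidden values
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                   # symmetric surrogate cost matrix

M = cp.Variable((n, n), PSD=True)   # convex stand-in for M = Y Y^T
constraints = [
    cp.diag(M) == 1,                # every example is equivalent to itself
    M >= 0,                         # entries of Y Y^T lie in {0, 1}
    M <= 1,
    cp.sum(M, axis=1) == n / k,     # balanced hidden classes (assumption)
]

problem = cp.Problem(cp.Minimize(cp.trace(C @ M)), constraints)
problem.solve()                     # a single convex solve; no local minima
print("relaxed objective value:", problem.value)

A solution of the relaxed problem can be rounded back to a discrete assignment, for example by clustering the leading eigenvectors of M; the point of the relaxation is that the training step itself becomes a single convex problem.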

Cite

Text

Guo and Schuurmans. "Convex Relaxations of Latent Variable Training." Neural Information Processing Systems, 2007.

Markdown

[Guo and Schuurmans. "Convex Relaxations of Latent Variable Training." Neural Information Processing Systems, 2007.](https://mlanthology.org/neurips/2007/guo2007neurips-convex/)

BibTeX

@inproceedings{guo2007neurips-convex,
  title     = {{Convex Relaxations of Latent Variable Training}},
  author    = {Guo, Yuhong and Schuurmans, Dale},
  booktitle = {Neural Information Processing Systems},
  year      = {2007},
  pages     = {601--608},
  url       = {https://mlanthology.org/neurips/2007/guo2007neurips-convex/}
}