Convex Two-Layer Modeling with Latent Structure

Abstract

Unsupervised learning of structured predictors has been a long-standing pursuit in machine learning. Recently, a conditional random field auto-encoder was proposed in a two-layer setting, allowing latent structured representations to be inferred automatically. Aside from being nonconvex, it also requires the computationally demanding inference of normalization. In this paper, we develop a convex relaxation of the two-layer conditional model that captures latent structure and estimates model parameters, jointly and optimally. We further expand its applicability by resorting to a weaker form of inference, maximum a posteriori (MAP). The flexibility of the model is demonstrated on two structures based on total unimodularity: graph matching and linear chains. Experimental results confirm the promise of the method.

Cite

Text

Ganapathiraman et al. "Convex Two-Layer Modeling with Latent Structure." Neural Information Processing Systems, 2016.

Markdown

[Ganapathiraman et al. "Convex Two-Layer Modeling with Latent Structure." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/ganapathiraman2016neurips-convex/)

BibTeX

@inproceedings{ganapathiraman2016neurips-convex,
  title     = {{Convex Two-Layer Modeling with Latent Structure}},
  author    = {Ganapathiraman, Vignesh and Zhang, Xinhua and Yu, Yaoliang and Wen, Junfeng},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {1280--1288},
  url       = {https://mlanthology.org/neurips/2016/ganapathiraman2016neurips-convex/}
}