More Data Means Less Inference: A Pseudo-Max Approach to Structured Learning
Abstract
The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structures, both learning and inference in this setting are intractable. Here we show that it is possible to circumvent this difficulty, when the input distribution is rich enough, via a method similar in spirit to pseudo-likelihood. We show how our new method achieves consistency, and illustrate empirically that it indeed performs as well as exact methods when sufficiently large training sets are used.
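The core idea, replacing the single global MAP constraint with one local constraint per variable (each evaluated with the variable's neighbors clamped to their gold labels), can be sketched as follows. This is a toy chain-MRF setup with a hypothetical `pseudomax_losses` helper, meant only to illustrate the flavor of the approach, not the paper's exact formulation:

```python
import numpy as np

def pseudomax_losses(unary, pairwise, y):
    """Per-variable pseudo-max hinge losses for a chain MRF (toy sketch).

    unary:    (n, k) node scores
    pairwise: (k, k) edge scores shared by all adjacent node pairs
    y:        (n,)   gold labels

    For each node i, all other nodes are fixed to their gold labels,
    so the max over labelings reduces to a max over the k labels of
    node i, and no global inference is needed.
    """
    n, k = unary.shape
    losses = np.zeros(n)
    for i in range(n):
        # Local score of each candidate label for node i, with its
        # chain neighbors clamped to their gold labels.
        s = unary[i].copy()
        if i > 0:
            s += pairwise[y[i - 1], :]
        if i < n - 1:
            s += pairwise[:, y[i + 1]]
        # 0/1 Hamming margin: require the gold label to win by 1.
        margin = (np.arange(k) != y[i]).astype(float)
        losses[i] = np.max(s + margin) - s[y[i]]
    return losses
```

With confident unaries, e.g. `unary = np.array([[2., 0.], [0., 2.]])`, zero pairwise scores, and `y = np.array([0, 1])`, every local loss is zero; with all-zero scores the losses are all one, since no label wins by the required margin.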
Cite
Text
Sontag et al. "More Data Means Less Inference: A Pseudo-Max Approach to Structured Learning." Neural Information Processing Systems, 2010.

Markdown

[Sontag et al. "More Data Means Less Inference: A Pseudo-Max Approach to Structured Learning." Neural Information Processing Systems, 2010.](https://mlanthology.org/neurips/2010/sontag2010neurips-more/)

BibTeX
@inproceedings{sontag2010neurips-more,
title = {{More Data Means Less Inference: A Pseudo-Max Approach to Structured Learning}},
author = {Sontag, David and Meshi, Ofer and Globerson, Amir and Jaakkola, Tommi S.},
booktitle = {Neural Information Processing Systems},
year = {2010},
pages = {2181--2189},
url = {https://mlanthology.org/neurips/2010/sontag2010neurips-more/}
}