Agreement-Based Learning
Abstract
The learning of probabilistic models with many hidden variables and non-decomposable dependencies is an important and challenging problem. In contrast to traditional approaches based on approximate inference in a single intractable model, our approach is to train a set of tractable submodels by encouraging them to agree on the hidden variables. This allows us to capture non-decomposable aspects of the data while still maintaining tractability. We propose an objective function for our approach, derive EM-style algorithms for parameter estimation, and demonstrate their effectiveness on three challenging real-world learning tasks.
Cite
Liang et al. "Agreement-Based Learning." Neural Information Processing Systems, 2007.

BibTeX
@inproceedings{liang2007neurips-agreementbased,
title = {{Agreement-Based Learning}},
author = {Liang, Percy and Klein, Dan and Jordan, Michael I.},
booktitle = {Neural Information Processing Systems},
year = {2007},
pages = {913--920},
url = {https://mlanthology.org/neurips/2007/liang2007neurips-agreementbased/}
}