Multi-Conditional Learning: Generative/Discriminative Training for Clustering and Classification
Abstract
This paper presents multi-conditional learning (MCL), a training criterion based on a product of multiple conditional likelihoods. When combining the traditional conditional probability of "label given input" with a generative probability of "input given label", the latter acts as a surprisingly effective regularizer. When applied to models with latent variables, MCL combines the structure-discovery capabilities of generative topic models, such as latent Dirichlet allocation and the exponential family harmonium, with the accuracy and robustness of discriminative classifiers, such as logistic regression and conditional random fields. We present results on several standard text data sets showing significant reductions in classification error due to MCL regularization, and substantial gains in precision and recall due to the latent structure discovered under MCL.
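The abstract describes the MCL criterion as a weighted product of conditional likelihoods, i.e. an objective of the form log p(y|x) + α · log p(x|y) derived from a single underlying joint model. The sketch below illustrates this on a toy, fully enumerable model with an unnormalized log-potential table s(x, y); the function name, the table parameterization, and the weight `alpha` are illustrative assumptions for exposition, not the paper's actual harmonium or CRF models.

```python
import math

def logsumexp(vals):
    # numerically stable log(sum(exp(v) for v in vals))
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def mcl_objective(scores, x, y, alpha=1.0):
    """Multi-conditional objective for one (x, y) pair.

    scores[x][y] is an unnormalized log-potential s(x, y) of a toy joint
    model; both conditionals are obtained by normalizing the same scores
    over the appropriate variable, so they share parameters as in MCL.
    """
    # log p(y|x) = s(x, y) - logsumexp over y' of s(x, y')
    log_p_y_given_x = scores[x][y] - logsumexp(scores[x])
    # log p(x|y) = s(x, y) - logsumexp over x' of s(x', y)
    column = [row[y] for row in scores]
    log_p_x_given_y = scores[x][y] - logsumexp(column)
    # weighted product of conditionals, in log space
    return log_p_y_given_x + alpha * log_p_x_given_y
```

With uniform scores, each conditional is uniform, so for a 2x2 table the objective at α = 1 is 2·log(1/2); training would instead adjust the shared potentials to raise this objective on labeled data, with the generative term acting as a regularizer on the discriminative one.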
Cite
Text
McCallum et al. "Multi-Conditional Learning: Generative/Discriminative Training for Clustering and Classification." AAAI Conference on Artificial Intelligence, 2006.
Markdown
[McCallum et al. "Multi-Conditional Learning: Generative/Discriminative Training for Clustering and Classification." AAAI Conference on Artificial Intelligence, 2006.](https://mlanthology.org/aaai/2006/mccallum2006aaai-multi/)
BibTeX
@inproceedings{mccallum2006aaai-multi,
title = {{Multi-Conditional Learning: Generative/Discriminative Training for Clustering and Classification}},
author = {McCallum, Andrew and Pal, Chris and Druck, Gregory and Wang, Xuerui},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2006},
pages = {433--439},
url = {https://mlanthology.org/aaai/2006/mccallum2006aaai-multi/}
}