Latent Classification Models
Abstract
One of the simplest, and yet most consistently well-performing, classes of classifiers is the Naïve Bayes model. These models rely on two assumptions: (i) all the attributes used to describe an instance are conditionally independent given the class of that instance, and (ii) all attributes follow a specific parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the Naïve Bayes model with a mixture of factor analyzers, thereby relaxing the assumptions of the Naïve Bayes classifier. In the proposed model the continuous attributes are described by a mixture of multivariate Gaussians, where the conditional dependencies among the attributes are encoded using latent variables. We present algorithms for learning both the parameters and the structure of a latent classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers.
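To make assumptions (i) and (ii) concrete, the following is a minimal sketch (not the paper's implementation) of the Gaussian Naïve Bayes baseline that the latent classification model relaxes: each attribute is modeled as an independent univariate Gaussian per class, and prediction maximizes the class posterior. All function and variable names here are illustrative, and the toy data is synthetic.

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Estimate per-class attribute means, variances, and class priors.

    Assumption (i): attributes are conditionally independent given the class,
    so only per-attribute (diagonal) statistics are stored.
    Assumption (ii): each attribute follows a univariate Gaussian per class.
    """
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        # Small variance floor avoids division by zero on constant attributes.
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6, len(Xc) / len(X))
    return params

def log_gauss(x, mu, var):
    """Elementwise log-density of univariate Gaussians."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def predict_gaussian_nb(params, X):
    """Pick the class maximizing log prior + sum of per-attribute log-likelihoods."""
    preds = []
    for x in X:
        scores = {c: np.log(prior) + log_gauss(x, mu, var).sum()
                  for c, (mu, var, prior) in params.items()}
        preds.append(max(scores, key=scores.get))
    return np.array(preds)

# Toy data: two well-separated classes with three continuous attributes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, size=(50, 3)),
               rng.normal(+2.0, 1.0, size=(50, 3))])
y = np.array([0] * 50 + [1] * 50)

params = fit_gaussian_nb(X, y)
accuracy = (predict_gaussian_nb(params, X) == y).mean()
```

The latent classification model replaces the diagonal per-class Gaussians above with a mixture of multivariate Gaussians whose between-attribute correlations are mediated by latent variables, in the spirit of a mixture of factor analyzers; that richer conditional distribution is what the paper's learning algorithms estimate.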
Cite
Text
Langseth and Nielsen. "Latent Classification Models." Machine Learning, 2005. doi:10.1007/S10994-005-0472-5

Markdown

[Langseth and Nielsen. "Latent Classification Models." Machine Learning, 2005.](https://mlanthology.org/mlj/2005/langseth2005mlj-latent/) doi:10.1007/S10994-005-0472-5

BibTeX
@article{langseth2005mlj-latent,
title = {{Latent Classification Models}},
author = {Langseth, Helge and Nielsen, Thomas D.},
journal = {Machine Learning},
year = {2005},
pages = {237--265},
doi = {10.1007/S10994-005-0472-5},
volume = {59},
url = {https://mlanthology.org/mlj/2005/langseth2005mlj-latent/}
}