Classifier Learning with Supervised Marginal Likelihood
Abstract
It has been argued that in supervised classification tasks it may be more sensible to perform model selection with respect to a more focused model selection score, like the supervised (conditional) marginal likelihood, than with respect to the standard unsupervised marginal likelihood criterion. However, for most Bayesian network models, computing the supervised marginal likelihood score takes exponential time with respect to the amount of observed data. In this paper, we consider diagnostic Bayesian network classifiers where the significant model parameters represent conditional distributions for the class variable, given the values of the predictor variables, in which case the supervised marginal likelihood can be computed in linear time with respect to the data. As the number of model parameters in this case grows exponentially with respect to the number of predictors, we focus on simple diagnostic models where the number of relevant predictors is small, and suggest two approaches for applying this type of model in classification. The first approach is based on mixtures of simple diagnostic models, while in the second approach we use the small predictor sets of the simple diagnostic models to augment the Naive Bayes classifier.
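For concreteness, the linear-time computation mentioned in the abstract can be illustrated with a small sketch. The snippet below is not code from the paper; it assumes a symmetric Dirichlet(alpha) prior on each conditional class distribution given a predictor configuration, in which case the supervised (conditional) marginal likelihood of a diagnostic model factorizes over the observed predictor configurations and can be evaluated in a single pass over the data. The function name and interface are hypothetical.

```python
import math
from collections import Counter, defaultdict

def supervised_marginal_likelihood(predictors, classes, num_classes, alpha=1.0):
    """Log supervised (conditional) marginal likelihood
    P(class data | predictor data) for a simple diagnostic model:
    the class distribution is conditioned on the full predictor
    configuration, with a symmetric Dirichlet(alpha) prior on each
    conditional distribution. One pass over the data, so the cost is
    linear in the number of observations."""
    # Count N_jk: how often class k occurs with predictor configuration j.
    counts = defaultdict(Counter)
    for x, c in zip(predictors, classes):
        counts[tuple(x)][c] += 1

    log_score = 0.0
    for config_counts in counts.values():
        n_j = sum(config_counts.values())
        # Gamma(K * alpha) / Gamma(K * alpha + N_j)
        log_score += math.lgamma(num_classes * alpha) \
                   - math.lgamma(num_classes * alpha + n_j)
        # prod_k Gamma(alpha + N_jk) / Gamma(alpha)
        for n_jk in config_counts.values():
            log_score += math.lgamma(alpha + n_jk) - math.lgamma(alpha)
    return log_score

# Tiny illustrative usage on made-up data:
X = [(0, 1), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 1, 1]
print(supervised_marginal_likelihood(X, y, num_classes=2))
```

Because the score decomposes over predictor configurations, restricting the diagnostic model to a small predictor set keeps both the parameter count and the number of distinct configurations manageable, which is the setting the paper's mixture and Naive Bayes augmentation approaches build on.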
Cite
Text
Kontkanen et al. "Classifier Learning with Supervised Marginal Likelihood." Conference on Uncertainty in Artificial Intelligence, 2001.
Markdown
[Kontkanen et al. "Classifier Learning with Supervised Marginal Likelihood." Conference on Uncertainty in Artificial Intelligence, 2001.](https://mlanthology.org/uai/2001/kontkanen2001uai-classifier/)
BibTeX
@inproceedings{kontkanen2001uai-classifier,
title = {{Classifier Learning with Supervised Marginal Likelihood}},
author = {Kontkanen, Petri and Myllymäki, Petri and Tirri, Henry},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2001},
pages = {277-284},
url = {https://mlanthology.org/uai/2001/kontkanen2001uai-classifier/}
}