Boosting Density Function Estimators
Abstract
In this paper, we focus on adapting boosting to density function estimation, which is useful in a number of fields, including Natural Language Processing and Computational Biology. Boosting has previously been used to optimize classification algorithms, improving generalization accuracy by combining many classifiers. The core of the boosting strategy, in the well-known AdaBoost algorithm [4], consists of updating the distribution over learning instances, increasing (resp. decreasing) the weights of the examples misclassified (resp. correctly classified) by the current classifier. Except for [17] and [18], few works have attempted to exploit interesting theoretical properties of boosting (such as margin maximization) independently of a classification task. In this paper, we take into account not classification errors to optimize a classifier, but density estimation errors to optimize an estimator (here, a probabilistic automaton) of a given target density. Experimental results are presented showing the merit of our approach.
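The weight-update strategy the abstract attributes to AdaBoost can be illustrated with a minimal sketch (the function name and the toy data below are ours, not from the paper): misclassified examples are scaled up, correctly classified ones are scaled down, and the distribution is renormalized.

```python
import math

def adaboost_reweight(weights, misclassified):
    """One AdaBoost-style round over the instance distribution.

    weights       : current distribution over training examples (sums to 1)
    misclassified : booleans, True if the current classifier errs on example i
    """
    # Weighted error of the current classifier (assumed to lie in (0, 0.5))
    eps = sum(w for w, wrong in zip(weights, misclassified) if wrong)
    # Classifier confidence
    alpha = 0.5 * math.log((1.0 - eps) / eps)
    # Misclassified examples scaled up by e^alpha, correct ones down by e^-alpha
    new_w = [w * math.exp(alpha if wrong else -alpha)
             for w, wrong in zip(weights, misclassified)]
    z = sum(new_w)  # normalization constant
    return [w / z for w in new_w]

# Toy run: four equally weighted examples, only the second misclassified.
dist = adaboost_reweight([0.25] * 4, [False, True, False, False])
# The misclassified example now carries weight 0.5; the others share the rest.
```

The paper's contribution replaces the classification error driving this update with a density estimation error; the update mechanics above are the standard classification form for reference.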
Cite

Text:
Thollard et al. "Boosting Density Function Estimators." European Conference on Machine Learning, 2002. doi:10.1007/3-540-36755-1_36

Markdown:
[Thollard et al. "Boosting Density Function Estimators." European Conference on Machine Learning, 2002.](https://mlanthology.org/ecmlpkdd/2002/thollard2002ecml-boosting/) doi:10.1007/3-540-36755-1_36

BibTeX:
@inproceedings{thollard2002ecml-boosting,
title = {{Boosting Density Function Estimators}},
author = {Thollard, Franck and Sebban, Marc and Ézéquel, Philippe},
booktitle = {European Conference on Machine Learning},
year = {2002},
pages = {431--443},
doi = {10.1007/3-540-36755-1_36},
url = {https://mlanthology.org/ecmlpkdd/2002/thollard2002ecml-boosting/}
}