Efficiently Approximating Weighted Sums with Exponentially Many Terms
Abstract
We explore applications of Markov chain Monte Carlo methods for weight estimation over inputs to the Weighted Majority (WM) and Winnow algorithms. This is useful when there are exponentially many such inputs and no apparent means to efficiently compute their weighted sum. The applications we examine are pruning classifier ensembles using WM and learning general DNF formulas using Winnow. These uses require exponentially many inputs, so we define Markov chains over the inputs to approximate the weighted sums. We state performance guarantees for our algorithms and present preliminary empirical results.
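The abstract's core idea, estimating a weighted sum over a pool of inputs too large to enumerate by sampling from a Markov chain whose stationary distribution is proportional to the weights, can be illustrated with a minimal sketch. This is not the paper's construction (the paper's chains run over structured inputs such as ensemble prunings or DNF terms); it is a generic Metropolis-style sampler over a small explicit expert pool, with all names and parameters invented for illustration.

```python
import random

def weighted_vote_exact(weights, predictions):
    """Exact Weighted-Majority-style vote: sign of the weighted sum."""
    s = sum(w * p for w, p in zip(weights, predictions))
    return 1 if s >= 0 else -1

def weighted_vote_mcmc(weights, predictions, steps=20000, seed=0):
    """Estimate the weighted vote with a Metropolis chain over expert
    indices whose stationary distribution is proportional to the weights,
    so the time-average of the sampled predictions tracks the weighted sum."""
    rng = random.Random(seed)
    n = len(weights)
    i = rng.randrange(n)            # current state of the chain: an expert index
    tally = 0
    for _ in range(steps):
        j = rng.randrange(n)        # propose a uniformly random expert
        # Accept with probability min(1, w_j / w_i); this makes the
        # stationary distribution proportional to the weights.
        if weights[j] >= weights[i] or rng.random() < weights[j] / weights[i]:
            i = j
        tally += predictions[i]
    return 1 if tally >= 0 else -1

weights = [8.0, 1.0, 1.0, 0.5]
predictions = [1, -1, -1, -1]       # the heavily weighted expert votes +1
assert weighted_vote_exact(weights, predictions) == 1
assert weighted_vote_mcmc(weights, predictions) == 1
```

The point of the sketch is that the chain never touches the full sum: it only compares the weights of two inputs at a time, which is the property that makes the approach viable when the input set is exponentially large.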
Cite
Text
Chawla et al. "Efficiently Approximating Weighted Sums with Exponentially Many Terms." Annual Conference on Computational Learning Theory, 2001. doi:10.1007/3-540-44581-1_6
Markdown
[Chawla et al. "Efficiently Approximating Weighted Sums with Exponentially Many Terms." Annual Conference on Computational Learning Theory, 2001.](https://mlanthology.org/colt/2001/chawla2001colt-efficiently/) doi:10.1007/3-540-44581-1_6
BibTeX
@inproceedings{chawla2001colt-efficiently,
title = {{Efficiently Approximating Weighted Sums with Exponentially Many Terms}},
author = {Chawla, Deepak and Li, Lin and Scott, Stephen},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2001},
pages = {82--98},
doi = {10.1007/3-540-44581-1_6},
url = {https://mlanthology.org/colt/2001/chawla2001colt-efficiently/}
}