A Comparative Evaluation of Voting and Meta-Learning on Partitioned Data

Abstract

Much of the research in inductive learning concentrates on problems with relatively small amounts of data. With the coming age of very large network computing, it is likely that orders of magnitude more data will become available in databases for learning problems of real-world importance. Some learning algorithms assume that the entire data set fits into main memory, which is not feasible for massive amounts of data. One approach to handling a large data set is to partition it into subsets, run the learning algorithm on each subset, and combine the results. In this paper we evaluate different techniques for learning from partitioned data. Our meta-learning approach is empirically compared with techniques in the literature that aim to combine multiple sources of evidence into a single prediction.
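The partition/learn/combine scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the toy nearest-centroid base learner and all function names are assumptions chosen for brevity, and the two combining strategies shown (unweighted majority voting, and a stacking-style meta-learner trained on base-learner predictions over a held-out set) stand in for the families of techniques the paper compares.

```python
# Sketch of learning from partitioned data. The NearestCentroid base
# learner and all names below are illustrative, not from the paper.
import random
from collections import Counter

class NearestCentroid:
    """Toy base learner: predicts the class of the nearest class centroid."""
    def fit(self, X, y):
        sums, counts = {}, Counter(y)
        for xi, yi in zip(X, y):
            s = sums.setdefault(yi, [0.0] * len(xi))
            for j, v in enumerate(xi):
                s[j] += v
        self.centroids = {c: [v / counts[c] for v in s] for c, s in sums.items()}
        return self

    def predict(self, X):
        def dist2(a, b):
            return sum((u - v) ** 2 for u, v in zip(a, b))
        return [min(self.centroids, key=lambda c: dist2(xi, self.centroids[c]))
                for xi in X]

def partition(X, y, k):
    """Split the data into k disjoint random subsets."""
    idx = list(range(len(X)))
    random.shuffle(idx)
    return [([X[i] for i in idx[p::k]], [y[i] for i in idx[p::k]])
            for p in range(k)]

def train_on_partitions(X, y, k):
    """Run the base learning algorithm independently on each subset."""
    return [NearestCentroid().fit(Xp, yp) for Xp, yp in partition(X, y, k)]

def vote(learners, X):
    """Combine base predictions by unweighted majority voting."""
    preds = [m.predict(X) for m in learners]
    return [Counter(col).most_common(1)[0][0] for col in zip(*preds)]

def meta_learn(learners, X_val, y_val):
    """Stacking-style meta-learning: fit a second-level learner on the
    base learners' predictions over a held-out validation set."""
    meta_X = [list(map(float, row))
              for row in zip(*[m.predict(X_val) for m in learners])]
    return NearestCentroid().fit(meta_X, y_val)
```

A quick usage example on two well-separated Gaussian clusters:

```python
random.seed(0)
X = [[random.gauss(c, 1.0), random.gauss(c, 1.0)]
     for c in (0, 3) for _ in range(50)]
y = [lbl for lbl in (0, 1) for _ in range(50)]
learners = train_on_partitions(X, y, k=4)
print(vote(learners, [[0.1, 0.2], [3.1, 2.9]]))  # majority vote over 4 base learners
```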

Cite

Text

Chan and Stolfo. "A Comparative Evaluation of Voting and Meta-Learning on Partitioned Data." International Conference on Machine Learning, 1995. doi:10.1016/B978-1-55860-377-6.50020-7

Markdown

[Chan and Stolfo. "A Comparative Evaluation of Voting and Meta-Learning on Partitioned Data." International Conference on Machine Learning, 1995.](https://mlanthology.org/icml/1995/chan1995icml-comparative/) doi:10.1016/B978-1-55860-377-6.50020-7

BibTeX

@inproceedings{chan1995icml-comparative,
  title     = {{A Comparative Evaluation of Voting and Meta-Learning on Partitioned Data}},
  author    = {Chan, Philip K. and Stolfo, Salvatore J.},
  booktitle = {International Conference on Machine Learning},
  year      = {1995},
  pages     = {90--98},
  doi       = {10.1016/B978-1-55860-377-6.50020-7},
  url       = {https://mlanthology.org/icml/1995/chan1995icml-comparative/}
}