Using Additive Expert Ensembles to Cope with Concept Drift
Abstract
We consider online learning where the target concept can change over time. Previous work on expert prediction algorithms has bounded the worst-case performance on any subsequence of the training data relative to the performance of the best expert. However, because these "experts" may be difficult to implement, we take a more general approach and bound performance relative to the actual performance of any online learner on this same subsequence. We present the additive expert ensemble algorithm AddExp, a new, general method for using any online learner for drifting concepts. We adapt techniques for analyzing expert prediction algorithms to prove mistake and loss bounds for a discrete and a continuous version of AddExp. Finally, we present pruning methods and empirical results for data sets with concept drift.
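The abstract's additive-ensemble idea can be illustrated with a short sketch: maintain a weighted pool of base online learners, decay the weight of each expert that errs, and add a fresh expert whenever the ensemble itself errs. This is only a hedged illustration of the discrete variant as described above; the parameter names `beta` and `gamma`, the default values, and the toy `MajorityLearner` base learner are assumptions for illustration, not the paper's exact specification.

```python
class MajorityLearner:
    """Toy online base learner (illustrative stand-in): predicts the
    majority label seen so far, defaulting to 0 before any training."""
    def __init__(self):
        self.counts = {}

    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else 0

    def partial_fit(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1


class AdditiveExpertEnsemble:
    """Sketch of an additive expert ensemble for discrete predictions,
    in the spirit of AddExp.D; update rules here are assumptions."""
    def __init__(self, make_learner, beta=0.5, gamma=0.1):
        self.make_learner = make_learner  # factory for base online learners
        self.beta = beta                  # weight decay for mistaken experts
        self.gamma = gamma                # relative weight of a new expert
        self.experts = [make_learner()]
        self.weights = [1.0]

    def predict(self, x):
        # Weighted vote over the experts' discrete predictions.
        votes = {}
        for w, e in zip(self.weights, self.experts):
            p = e.predict(x)
            votes[p] = votes.get(p, 0.0) + w
        return max(votes, key=votes.get)

    def update(self, x, y):
        y_hat = self.predict(x)
        # Multiplicatively decay the weight of each mistaken expert.
        for i, e in enumerate(self.experts):
            if e.predict(x) != y:
                self.weights[i] *= self.beta
        # On an ensemble mistake, add a fresh expert whose weight is a
        # gamma fraction of the current total weight.
        if y_hat != y:
            self.experts.append(self.make_learner())
            self.weights.append(self.gamma * sum(self.weights))
        # Train every expert (including any new one) on the example.
        for e in self.experts:
            e.partial_fit(x, y)
        return y_hat
```

On a stream whose label drifts from 0 to 1, the decayed old experts are quickly outvoted by newly added ones, so the ensemble tracks the new concept after a handful of examples.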
Cite
Text
Kolter and Maloof. "Using Additive Expert Ensembles to Cope with Concept Drift." International Conference on Machine Learning, 2005. doi:10.1145/1102351.1102408
Markdown
[Kolter and Maloof. "Using Additive Expert Ensembles to Cope with Concept Drift." International Conference on Machine Learning, 2005.](https://mlanthology.org/icml/2005/kolter2005icml-using/) doi:10.1145/1102351.1102408
BibTeX
@inproceedings{kolter2005icml-using,
title = {{Using Additive Expert Ensembles to Cope with Concept Drift}},
author = {Kolter, Jeremy Z. and Maloof, Marcus A.},
booktitle = {International Conference on Machine Learning},
year = {2005},
pages = {449-456},
doi = {10.1145/1102351.1102408},
url = {https://mlanthology.org/icml/2005/kolter2005icml-using/}
}