New Analysis and Algorithm for Learning with Drifting Distributions

Abstract

We present a new analysis of the problem of learning with drifting distributions in the batch setting using the notion of discrepancy. We prove learning bounds based on the Rademacher complexity of the hypothesis set and the discrepancy of distributions both for a drifting PAC scenario and a tracking scenario. Our bounds are always tighter than, and in some cases substantially improve upon, previous bounds based on the L1 distance. We also present a generalization of the standard on-line to batch conversion to the drifting scenario in terms of the discrepancy and arbitrary convex combinations of hypotheses. We introduce a new algorithm exploiting these learning guarantees, which we show can be formulated as a simple QP. Finally, we report the results of preliminary experiments demonstrating the benefits of this algorithm.
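
As a rough illustration of the discrepancy-based weighting idea described in the abstract, the sketch below re-weights past training segments by a discrepancy estimate to the most recent segment before fitting a regularized regressor. This is a hedged sketch, not the paper's algorithm: it assumes linear hypotheses with squared loss, for which the spectral norm of the difference of empirical second-moment matrices serves as a discrepancy proxy, and the inverse-discrepancy weights are a placeholder heuristic rather than the QP-derived weights of the paper. The names `linear_discrepancy` and `weighted_ridge` are hypothetical.

```python
import numpy as np

def linear_discrepancy(X_a, X_b):
    """Discrepancy proxy between two unlabeled samples, assuming linear
    hypotheses and squared loss: the spectral norm (largest singular
    value) of the difference of empirical second-moment matrices."""
    M_a = X_a.T @ X_a / len(X_a)
    M_b = X_b.T @ X_b / len(X_b)
    return np.linalg.norm(M_a - M_b, ord=2)

def weighted_ridge(segments, target_X, lam=1e-2, eps=1e-3):
    """Ridge regression over drifting segments, down-weighting segments
    whose distribution is far (in discrepancy) from the target sample.
    The inverse-discrepancy weights are a heuristic stand-in for the
    QP-derived weights in the paper."""
    weights = np.array(
        [1.0 / (eps + linear_discrepancy(X, target_X)) for X, _ in segments]
    )
    weights /= weights.sum()
    d = segments[0][0].shape[1]
    A = lam * np.eye(d)  # regularized normal equations
    b = np.zeros(d)
    for (X, y), w in zip(segments, weights):
        A += w * X.T @ X / len(X)
        b += w * X.T @ y / len(X)
    return np.linalg.solve(A, b)

# Toy usage: three segments whose underlying linear model drifts.
rng = np.random.default_rng(0)
segments = []
for shift in (0.0, 0.5, 1.0):
    X = rng.normal(loc=shift, scale=1.0, size=(200, 5))
    y = X @ (np.ones(5) + shift) + 0.1 * rng.standard_normal(200)
    segments.append((X, y))
theta = weighted_ridge(segments, target_X=segments[-1][0])
```

In the paper's formulation, the weights themselves are chosen by solving a QP that directly minimizes the discrepancy-based learning bound; the heuristic above only mimics the qualitative effect of down-weighting segments that have drifted far from the target distribution.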

Cite

Text

Mohri and Muñoz Medina. "New Analysis and Algorithm for Learning with Drifting Distributions." International Conference on Algorithmic Learning Theory, 2012. doi:10.1007/978-3-642-34106-9_13

Markdown

[Mohri and Muñoz Medina. "New Analysis and Algorithm for Learning with Drifting Distributions." International Conference on Algorithmic Learning Theory, 2012.](https://mlanthology.org/alt/2012/mohri2012alt-new/) doi:10.1007/978-3-642-34106-9_13

BibTeX

@inproceedings{mohri2012alt-new,
  title     = {{New Analysis and Algorithm for Learning with Drifting Distributions}},
  author    = {Mohri, Mehryar and Muñoz Medina, Andres},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2012},
  pages     = {124--138},
  doi       = {10.1007/978-3-642-34106-9_13},
  url       = {https://mlanthology.org/alt/2012/mohri2012alt-new/}
}