Localized Complexities for Transductive Learning

Abstract

We prove two novel concentration inequalities for suprema of empirical processes when sampling without replacement, both of which take the variance of the functions into account. While these inequalities may have broad applications in learning theory in general, we exemplify their significance by studying the transductive setting, for which we provide the first excess risk bounds based on the localized complexity of the hypothesis class; these bounds can yield fast rates of convergence in transductive learning as well. We give a preliminary analysis of the localized complexities for the prominent case of kernel classes.
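For context on what the abstract's variance-dependent inequalities refine: in the i.i.d. setting, the classical benchmark is Talagrand's concentration inequality in Bousquet's form. The sketch below states that well-known i.i.d. form in standard notation; it is background, not the paper's theorem, whose contribution is an analogue of such variance-sensitive bounds when the sample is drawn without replacement.

```latex
% Talagrand's inequality in Bousquet's form (i.i.d. sampling) -- standard
% background, not the without-replacement result of the paper.
% Setting: X_1,\dots,X_n i.i.d.; \mathcal{F} a class with
% \mathbb{E} f(X_1) = 0 and \|f\|_\infty \le 1 for all f \in \mathcal{F};
% Z = \sup_{f \in \mathcal{F}} \sum_{i=1}^n f(X_i);
% \sigma^2 \ge \sup_{f \in \mathcal{F}} \operatorname{Var} f(X_1);
% v = n\sigma^2 + 2\,\mathbb{E}[Z].
\[
  \Pr\!\Big( Z \;\ge\; \mathbb{E}[Z] + \sqrt{2\,v\,t} + \tfrac{t}{3} \Big)
  \;\le\; e^{-t}
  \qquad \text{for all } t \ge 0 .
\]
```

Because the deviation term scales with the variance proxy $v$ rather than only the uniform bound on $f$, such inequalities are the standard route to localized-complexity excess risk bounds and fast rates in the i.i.d. case.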

Cite

Text

Tolstikhin et al. "Localized Complexities for Transductive Learning." Annual Conference on Computational Learning Theory, 2014.

Markdown

[Tolstikhin et al. "Localized Complexities for Transductive Learning." Annual Conference on Computational Learning Theory, 2014.](https://mlanthology.org/colt/2014/tolstikhin2014colt-localized/)

BibTeX

@inproceedings{tolstikhin2014colt-localized,
  title     = {{Localized Complexities for Transductive Learning}},
  author    = {Tolstikhin, Ilya O. and Blanchard, Gilles and Kloft, Marius},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2014},
  pages     = {857--884},
  url       = {https://mlanthology.org/colt/2014/tolstikhin2014colt-localized/}
}