Robustness and Generalization
Abstract
We derive generalization bounds for learning algorithms based on their robustness: the property that if a testing sample is "similar" to a training sample, then the testing error is close to the training error. This provides a novel approach, different from the complexity or stability arguments, to study generalization of learning algorithms. We further show that a weak notion of robustness is both sufficient and necessary for generalizability, which implies that robustness is a fundamental property for learning algorithms to work.
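The robustness property described above can be probed empirically. Below is a minimal sketch (not the paper's formal construction) of the partition-based idea: split the input space into K cells and measure the largest gap, over cells containing both training and testing points, between average test loss and average train loss. The data, the 1-NN learner, and the choice K = 20 are all illustrative assumptions.

```python
# A hedged sketch of (K, epsilon)-robustness: if a test point lands in the
# same partition cell as a training point, their losses should be close.
# Everything here (data, learner, K) is illustrative, not the paper's setup.
import random

random.seed(0)

def target(x):
    return 1 if x > 0.5 else 0  # ground-truth concept on [0, 1]

# Synthetic train/test samples drawn from the same distribution.
train = [(x, target(x)) for x in (random.random() for _ in range(200))]
test = [(x, target(x)) for x in (random.random() for _ in range(200))]

def predict_1nn(x):
    # 1-nearest-neighbour prediction from the training set.
    return min(train, key=lambda t: abs(t[0] - x))[1]

def loss(x, y):
    return 0 if predict_1nn(x) == y else 1  # 0-1 loss

K = 20  # number of partition cells of [0, 1]
cell = lambda x: min(int(x * K), K - 1)

# Robustness gap: max, over cells holding both train and test points,
# of |average test loss - average train loss| within the cell.
eps = 0.0
for k in range(K):
    tr = [loss(x, y) for x, y in train if cell(x) == k]
    te = [loss(x, y) for x, y in test if cell(x) == k]
    if tr and te:
        eps = max(eps, abs(sum(te) / len(te) - sum(tr) / len(tr)))

print(f"empirical robustness gap for K={K} cells: {eps:.3f}")
```

A small gap is evidence of robustness for this sample; the paper's result is that such robustness (in a suitably weak form) characterizes generalizability.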
Cite
Text
Xu and Mannor. "Robustness and Generalization." Annual Conference on Computational Learning Theory, 2010.
Markdown
[Xu and Mannor. "Robustness and Generalization." Annual Conference on Computational Learning Theory, 2010.](https://mlanthology.org/colt/2010/xu2010colt-robustness/)
BibTeX
@inproceedings{xu2010colt-robustness,
  title = {{Robustness and Generalization}},
  author = {Xu, Huan and Mannor, Shie},
  booktitle = {Annual Conference on Computational Learning Theory},
  year = {2010},
  pages = {503--515},
  url = {https://mlanthology.org/colt/2010/xu2010colt-robustness/}
}