How Local Should a Learning Method Be?
Abstract
We consider the question of why modern machine learning methods like support vector machines outperform earlier non-parametric techniques like k-NN. Our approach investigates the locality of learning methods, i.e., the tendency to focus mainly on the nearby part of the training set when constructing a prediction at a particular location. We show that, on the one hand, all consistent learning methods can be expected to be local in some sense; hence if we consider consistency a desirable property, then a degree of locality is unavoidable. On the other hand, we also claim that earlier methods like k-NN are local in a stricter manner, which implies performance limitations. Thus, we argue that a degree of locality is necessary but should not be overdone. Support vector machines and related techniques strike a good balance in this matter, which we suggest may partially explain their good performance in practice.
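The distinction the abstract draws can be made concrete with a toy sketch (not from the paper, and with made-up data): a 1-NN predictor is strictly local, so perturbing a far-away training label leaves its prediction unchanged, while a Nadaraya-Watson kernel smoother uses every training point with distance-decaying weight, so the same perturbation shifts its prediction slightly.

```python
import numpy as np

# Hypothetical toy data: 21 points on [0, 1] with noiseless sine labels.
X = np.linspace(0.0, 1.0, 21)
y = np.sin(2 * np.pi * X)
x0 = 0.1  # query point near the left end

def knn_predict(X, y, x, k=1):
    """Average the labels of the k nearest training points (strictly local)."""
    idx = np.argsort(np.abs(X - x))[:k]
    return y[idx].mean()

def kernel_predict(X, y, x, h=0.2):
    """Nadaraya-Watson estimate with a Gaussian kernel (softly local)."""
    w = np.exp(-0.5 * ((X - x) / h) ** 2)
    return np.dot(w, y) / w.sum()

# Perturb the training label farthest from the query point x0.
y_far = y.copy()
y_far[-1] += 5.0

knn_shift = abs(knn_predict(X, y_far, x0) - knn_predict(X, y, x0))
ker_shift = abs(kernel_predict(X, y_far, x0) - kernel_predict(X, y, x0))

print(knn_shift)  # exactly 0.0: 1-NN ignores the distant point entirely
print(ker_shift)  # small but nonzero: every point carries some weight
```

The kernel smoother stands in here for the broader class of globally-influenced estimators the paper contrasts with strict locality; the specific bandwidth and data are illustrative choices only.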
Zakai and Ritov. "How Local Should a Learning Method Be?" Annual Conference on Computational Learning Theory, 2008.
@inproceedings{zakai2008colt-local,
  title     = {{How Local Should a Learning Method Be?}},
  author    = {Zakai, Alon and Ritov, Yaacov},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2008},
  pages     = {205--216},
  url       = {https://mlanthology.org/colt/2008/zakai2008colt-local/}
}