The Tradeoffs of Large Scale Learning
Abstract
This contribution develops a theoretical framework that takes into account the effect of approximate optimization on learning algorithms. The analysis shows distinct tradeoffs for small-scale and large-scale learning problems. Small-scale learning problems are subject to the usual approximation--estimation tradeoff. Large-scale learning problems are subject to a qualitatively different tradeoff involving the computational complexity of the underlying optimization algorithms in non-trivial ways.
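The large-scale tradeoff described above can be illustrated with a small sketch (not taken from the paper; the synthetic data, learning rates, and budget are illustrative assumptions): under a fixed compute budget, a cheap-per-step stochastic optimizer can perform many more updates than an expensive full-batch method, even though each stochastic step is noisier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear regression problem (illustrative, not from the paper):
# y = X @ w_true + noise
n, d = 5000, 10
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

def loss(w):
    """Mean squared error over the full dataset."""
    return 0.5 * np.mean((X @ w - y) ** 2)

def batch_gd(steps, lr=0.1):
    """Full-batch gradient descent: each step costs O(n * d)."""
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n
        w -= lr * grad
    return w

def sgd(steps, lr=0.01):
    """Stochastic gradient descent: each step costs only O(d),
    so the same compute budget buys roughly n times more updates."""
    w = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n)
        grad = (X[i] @ w - y[i]) * X[i]
        w -= lr * grad
    return w

# Equal budget, measured in single-example gradient evaluations:
# 20 full passes of work = 20 batch steps = 20 * n stochastic steps.
w_batch = batch_gd(steps=20)
w_sgd = sgd(steps=20 * n)
```

Both optimizers reduce the training loss, but they spend the identical budget very differently: the batch method takes 20 accurate steps while SGD takes 100,000 noisy ones. This is the sense in which, at large scale, the optimization algorithm's per-step complexity enters the tradeoff alongside approximation and estimation error.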
Cite
Bottou, Léon and Bousquet, Olivier. "The Tradeoffs of Large Scale Learning." Neural Information Processing Systems, 2007. https://mlanthology.org/neurips/2007/bottou2007neurips-tradeoffs/

BibTeX
@inproceedings{bottou2007neurips-tradeoffs,
title = {{The Tradeoffs of Large Scale Learning}},
author = {Bottou, Léon and Bousquet, Olivier},
booktitle = {Neural Information Processing Systems},
year = {2007},
pages = {161--168},
url = {https://mlanthology.org/neurips/2007/bottou2007neurips-tradeoffs/}
}