NYTRO: When Subsampling Meets Early Stopping

Abstract

Early stopping is a well-known approach to reducing the time complexity of training and model selection for large-scale learning machines. On the other hand, memory/space (rather than time) complexity is the main constraint in many applications, and randomized subsampling techniques have been proposed to tackle this issue. In this paper we ask whether early stopping and subsampling ideas can be combined fruitfully. We consider the question in a least squares regression setting and propose a form of randomized iterative regularization based on early stopping and subsampling. In this context, we analyze the statistical and computational properties of the proposed method. The theoretical results are complemented and validated by a thorough experimental analysis.
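The core idea can be illustrated concretely: subsample m of the n training points uniformly at random (Nyström subsampling), restrict the kernel least squares solution to the span of those points, and regularize by stopping gradient descent at the iteration with the lowest validation error. The sketch below, in plain NumPy, is a minimal illustration of this combination under stated assumptions (Gaussian kernel, uniform subsampling, a 1/L step size); the function and parameter names are illustrative, not the authors' reference implementation.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel between the rows of A and the rows of B.
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def nytro_sketch(X, y, X_val, y_val, m=100, max_iter=500, sigma=1.0, seed=0):
    """Nystrom-subsampled gradient descent for kernel least squares,
    regularized by early stopping on a validation set.
    A sketch of the idea in the abstract, not the paper's exact algorithm."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=min(m, n), replace=False)  # uniform subsampling
    Xm = X[idx]
    Knm = gaussian_kernel(X, Xm, sigma)        # n x m training features
    Kvm = gaussian_kernel(X_val, Xm, sigma)    # validation features
    # Gradient of (1/n)||Knm a - y||^2 is Lipschitz with constant
    # L = 2||Knm||_2^2 / n, so step = 1/L guarantees descent.
    step = n / (2 * np.linalg.norm(Knm, 2) ** 2)
    a = np.zeros(Knm.shape[1])
    best_a, best_err = a.copy(), np.inf
    for t in range(max_iter):
        grad = (2.0 / n) * Knm.T @ (Knm @ a - y)
        a -= step * grad
        err = np.mean((Kvm @ a - y_val) ** 2)
        if err < best_err:           # early stopping: keep the best iterate
            best_a, best_err = a.copy(), err
    return Xm, best_a, best_err
```

Predictions at new points follow as gaussian_kernel(X_new, Xm, sigma) @ best_a. The number of iterations plays the role of the regularization parameter, while the subsampling level m caps the memory footprint at O(nm) instead of O(n^2).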

Cite

Text

Camoriano et al. "NYTRO: When Subsampling Meets Early Stopping." International Conference on Artificial Intelligence and Statistics, 2016.

Markdown

[Camoriano et al. "NYTRO: When Subsampling Meets Early Stopping." International Conference on Artificial Intelligence and Statistics, 2016.](https://mlanthology.org/aistats/2016/camoriano2016aistats-nytro/)

BibTeX

@inproceedings{camoriano2016aistats-nytro,
  title     = {{NYTRO: When Subsampling Meets Early Stopping}},
  author    = {Camoriano, Raffaello and Angles, Tomás and Rudi, Alessandro and Rosasco, Lorenzo},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2016},
  pages     = {1403--1411},
  url       = {https://mlanthology.org/aistats/2016/camoriano2016aistats-nytro/}
}