Near-Optimal Bounds for Cross-Validation via Loss Stability
Abstract
Multi-fold cross-validation is an established practice to estimate the error rate of a learning algorithm. Quantifying the variance-reduction gains of cross-validation has been challenging because of the inherent correlations introduced by the folds. In this work we introduce a new and weak measure of stability called *loss stability* and relate cross-validation performance to loss stability; we also establish that this relationship is near-optimal. Our work thus quantitatively improves the current best bounds on cross-validation.
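For concreteness, the sketch below (not from the paper) shows the multi-fold cross-validation estimator the abstract analyzes: the data are split into k folds, the algorithm is trained on k-1 of them, and the held-out errors are averaged. The names `k_fold_cv_error`, `train_fn`, and `loss_fn` are hypothetical, introduced only for illustration.

```python
import numpy as np

def k_fold_cv_error(X, y, train_fn, loss_fn, k=10, seed=0):
    """Estimate an algorithm's expected loss by k-fold cross-validation.

    train_fn(X_tr, y_tr) returns a predictor f; loss_fn(y_true, y_pred)
    returns per-example losses. Both are illustrative placeholders.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))          # shuffle once, then split
    folds = np.array_split(idx, k)         # k disjoint held-out folds
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        f = train_fn(X[train], y[train])   # fit on the other k-1 folds
        errors.append(float(loss_fn(y[test], f(X[test])).mean()))
    # The CV estimate averages the k held-out errors. Because the k
    # training sets overlap, the fold errors are correlated, which is
    # precisely what makes the variance reduction hard to quantify.
    return float(np.mean(errors))
```

As a usage example, `train_fn` could fit any supervised learner and `loss_fn` could be the 0/1 loss; the paper's contribution is bounding how well this averaged estimate concentrates in terms of the algorithm's loss stability.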
Cite
Text
Kumar et al. "Near-Optimal Bounds for Cross-Validation via Loss Stability." International Conference on Machine Learning, 2013.
Markdown
[Kumar et al. "Near-Optimal Bounds for Cross-Validation via Loss Stability." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/kumar2013icml-nearoptimal/)
BibTeX
@inproceedings{kumar2013icml-nearoptimal,
title = {{Near-Optimal Bounds for Cross-Validation via Loss Stability}},
author = {Kumar, Ravi and Lokshtanov, Daniel and Vassilvitskii, Sergei and Vattani, Andrea},
booktitle = {International Conference on Machine Learning},
year = {2013},
pages = {27--35},
volume = {28},
url = {https://mlanthology.org/icml/2013/kumar2013icml-nearoptimal/}
}