Theoretical Analysis of a Class of Randomized Regularization Methods
Abstract
The convergence behavior of traditional learning algorithms can be analyzed in the VC theoretical framework. Recently, many researchers have been interested in a class of randomized learning algorithms, including the Gibbs algorithm from statistical mechanics. However, no successful theory concerning the generalization behavior of these randomized learning algorithms has been established previously. In order to fully understand the behavior of these randomized estimators, we compare them with regularization schemes for deterministic estimators. Furthermore, we present a theoretical analysis of such algorithms that leads to rigorous convergence bounds.
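To make the comparison in the abstract concrete, the following sketch contrasts a deterministic regularized estimator (ridge regression) with a Gibbs-style randomized estimator that draws weights from a density proportional to exp(-beta * J(w)), where J(w) is the regularized empirical loss. This is an illustrative toy example, not the paper's construction; the data, the regularization strength `lam`, the temperature `beta`, and the Metropolis sampler are all assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D linear regression data: y = 2x + noise (hypothetical example).
X = rng.normal(size=(50, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=50)

lam = 0.1  # regularization strength (illustrative choice)

def objective(w):
    """Regularized empirical risk J(w) = mean squared loss + lam * ||w||^2."""
    resid = y - X @ w
    return np.mean(resid ** 2) + lam * np.dot(w, w)

# Deterministic regularized estimator: closed-form ridge solution.
n = len(y)
w_ridge = np.linalg.solve(X.T @ X / n + lam * np.eye(1), X.T @ y / n)

def gibbs_sample(beta, steps=5000, step_size=0.1):
    """Metropolis sampler for p(w) proportional to exp(-beta * J(w)).

    Large beta (low temperature) concentrates the samples near the
    deterministic regularized minimizer w_ridge.
    """
    w = np.zeros(1)
    samples = []
    for _ in range(steps):
        prop = w + step_size * rng.normal(size=1)
        # Accept with the Metropolis ratio exp(-beta * (J(prop) - J(w))).
        if rng.random() < np.exp(-beta * (objective(prop) - objective(w))):
            w = prop
        samples.append(w.copy())
    return np.array(samples[steps // 2:])  # discard burn-in

samples = gibbs_sample(beta=200.0)
w_gibbs_mean = samples.mean(axis=0)
```

At high `beta` the randomized estimator's samples cluster around the deterministic ridge solution, which is the sense in which such randomized schemes can be compared with deterministic regularization.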
Cite
Text
Zhang. "Theoretical Analysis of a Class of Randomized Regularization Methods." Annual Conference on Computational Learning Theory, 1999. doi:10.1145/307400.307433
Markdown
[Zhang. "Theoretical Analysis of a Class of Randomized Regularization Methods." Annual Conference on Computational Learning Theory, 1999.](https://mlanthology.org/colt/1999/zhang1999colt-theoretical/) doi:10.1145/307400.307433
BibTeX
@inproceedings{zhang1999colt-theoretical,
title = {{Theoretical Analysis of a Class of Randomized Regularization Methods}},
author = {Zhang, Tong},
booktitle = {Annual Conference on Computational Learning Theory},
year = {1999},
pages = {156-163},
doi = {10.1145/307400.307433},
url = {https://mlanthology.org/colt/1999/zhang1999colt-theoretical/}
}