Data Dependent Concentration Bounds for Sequential Prediction Algorithms
Abstract
We investigate the generalization behavior of sequential prediction (online) algorithms when data are generated from a probability distribution. Using newly developed probability inequalities, we bound the total generalization performance of a learning algorithm in terms of its observed total loss. Consequences of this analysis are illustrated with examples.
Cite
Text
Zhang. "Data Dependent Concentration Bounds for Sequential Prediction Algorithms." Annual Conference on Computational Learning Theory, 2005. doi:10.1007/11503415_12
Markdown
[Zhang. "Data Dependent Concentration Bounds for Sequential Prediction Algorithms." Annual Conference on Computational Learning Theory, 2005.](https://mlanthology.org/colt/2005/zhang2005colt-data/) doi:10.1007/11503415_12
BibTeX
@inproceedings{zhang2005colt-data,
title = {{Data Dependent Concentration Bounds for Sequential Prediction Algorithms}},
author = {Zhang, Tong},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2005},
pages = {173--187},
doi = {10.1007/11503415_12},
url = {https://mlanthology.org/colt/2005/zhang2005colt-data/}
}