A Sequential Approximation Bound for Some Sample-Dependent Convex Optimization Problems with Applications in Learning

Abstract

In this paper, we study a class of sample-dependent convex optimization problems and derive a general sequential approximation bound for their solutions. This analysis is closely related to the regret bound framework in online learning. However, we apply it to batch learning algorithms rather than online stochastic gradient descent methods. Applications of this analysis to some classification and regression problems are illustrated.

Cite

Text

Zhang. "A Sequential Approximation Bound for Some Sample-Dependent Convex Optimization Problems with Applications in Learning." Annual Conference on Computational Learning Theory, 2001. doi:10.1007/3-540-44581-1_5

Markdown

[Zhang. "A Sequential Approximation Bound for Some Sample-Dependent Convex Optimization Problems with Applications in Learning." Annual Conference on Computational Learning Theory, 2001.](https://mlanthology.org/colt/2001/zhang2001colt-sequential/) doi:10.1007/3-540-44581-1_5

BibTeX

@inproceedings{zhang2001colt-sequential,
  title     = {{A Sequential Approximation Bound for Some Sample-Dependent Convex Optimization Problems with Applications in Learning}},
  author    = {Zhang, Tong},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2001},
  pages     = {65--81},
  doi       = {10.1007/3-540-44581-1_5},
  url       = {https://mlanthology.org/colt/2001/zhang2001colt-sequential/}
}