Improved Bounds About On-Line Learning of Smooth Functions of a Single Variable
Abstract
We consider the complexity of learning classes of smooth functions formed by bounding different norms of a function's derivative. The learning model is the generalization of the mistake-bound model to continuous-valued functions. Suppose F_q is the set of all absolutely continuous functions f from [0, 1] to R such that ‖f′‖_q ≤ 1, and opt(F_q, m) is the best possible bound on the worst-case sum of absolute prediction errors over sequences of m trials. We show that for all q ≥ 2, opt(F_q, m) = Θ(√log m), and that opt(F_2, m) = √log m/2 ± O(1).
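The membership condition ‖f′‖_q ≤ 1 defining F_q can be checked numerically for a concrete function. The sketch below is illustrative only and is not from the paper; it estimates the L_q norm of the derivative, (∫₀¹ |f′(x)|^q dx)^(1/q), by finite differences on a uniform grid. The helper name and grid size are assumptions for this example.

```python
def lq_norm_of_derivative(f, q, n=100000):
    """Numerically estimate ||f'||_q = (integral over [0,1] of |f'(x)|^q dx)^(1/q)
    using forward finite differences on a uniform grid (illustrative sketch)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = i * h
        deriv = (f(x + h) - f(x)) / h  # forward-difference estimate of f'(x)
        total += abs(deriv) ** q * h   # Riemann-sum contribution
    return total ** (1.0 / q)

# Example: f(x) = x has f'(x) = 1, so ||f'||_q = 1 for every q,
# placing f exactly on the boundary of F_q.
print(lq_norm_of_derivative(lambda x: x, q=2))
```

For f(x) = x the finite differences are exact, so the estimate equals 1 up to floating-point error; smoother or steeper functions land inside or outside F_q accordingly.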
Cite
Long, Philip M. "Improved Bounds About On-Line Learning of Smooth Functions of a Single Variable." International Conference on Algorithmic Learning Theory, 1996, pp. 26-36. doi:10.1007/3-540-61863-5_31
BibTeX
@inproceedings{long1996alt-improved,
title = {{Improved Bounds About On-Line Learning of Smooth Functions of a Single Variable}},
author = {Long, Philip M.},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {1996},
pages = {26-36},
doi = {10.1007/3-540-61863-5_31},
url = {https://mlanthology.org/alt/1996/long1996alt-improved/}
}