On Computable Online Learning

Abstract

We initiate the first study of computable online (c-online) learning, which we analyse under varying requirements for “optimality” in terms of the mistake bound. Our main contribution is to give a necessary and sufficient condition for optimal c-online learning and show that the Littlestone dimension no longer characterizes the optimal mistake bound of c-online learning. Furthermore, we introduce anytime optimal (a-optimal) online learning, a more natural conceptualization of “optimality” and a generalization of Littlestone’s Standard Optimal Algorithm. We show the existence of a computational separation between a-optimal and optimal online learning, proving that a-optimal online learning is computationally more difficult. Finally, we consider online learning with no requirements for optimality, and show, under a weaker notion of computability, that the finiteness of the Littlestone dimension no longer characterizes whether a class is c-online learnable with finite mistake bound. A potential avenue for strengthening this result is suggested by exploring the relationship between c-online and CPAC learning, where we show that c-online learning is as difficult as improper CPAC learning.
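For readers unfamiliar with the mistake-bound setting the abstract refers to, the following is an illustrative sketch (not from the paper) of online learning with the classic Halving algorithm over a finite hypothesis class. Each mistake at least halves the version space, giving a mistake bound of log2|H|; the Littlestone dimension generalizes this style of bound to infinite classes. The hypothesis class and stream below are hypothetical examples chosen for the demo.

```python
import math

def halving_algorithm(hypotheses, stream):
    """Online learning with the Halving algorithm (illustrative sketch).

    hypotheses: list of callables h(x) -> 0/1, assumed to contain
    the target (realizable setting).
    stream: iterable of (x, true_label) pairs revealed one at a time.
    Returns the number of mistakes made.
    """
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        # Predict by majority vote over the current version space.
        votes = sum(h(x) for h in version_space)
        prediction = 1 if 2 * votes >= len(version_space) else 0
        if prediction != y:
            mistakes += 1
        # Keep only hypotheses consistent with the revealed label.
        version_space = [h for h in version_space if h(x) == y]
    return mistakes

# Hypothetical example: threshold functions on {0, ..., 7},
# with the target threshold at 3.
H = [(lambda x, t=t: int(x >= t)) for t in range(9)]
target = H[3]
stream = [(x, target(x)) for x in range(8)]
m = halving_algorithm(H, stream)
# A mistake occurs only when the majority is wrong, so each mistake
# removes at least half the version space: m <= log2(|H|).
assert m <= math.log2(len(H))
```

Note that this sketch says nothing about computability: the paper's point is precisely that such optimality guarantees behave differently once the learner is required to be a computable function.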

Cite

Text

Hasrati and Ben-David. "On Computable Online Learning." Proceedings of The 34th International Conference on Algorithmic Learning Theory, 2023.

Markdown

[Hasrati and Ben-David. "On Computable Online Learning." Proceedings of The 34th International Conference on Algorithmic Learning Theory, 2023.](https://mlanthology.org/alt/2023/hasrati2023alt-computable/)

BibTeX

@inproceedings{hasrati2023alt-computable,
  title     = {{On Computable Online Learning}},
  author    = {Hasrati, Niki and Ben-David, Shai},
  booktitle = {Proceedings of The 34th International Conference on Algorithmic Learning Theory},
  year      = {2023},
  pages     = {707--725},
  volume    = {201},
  url       = {https://mlanthology.org/alt/2023/hasrati2023alt-computable/}
}