A Unified Theory of Supervised Online Learnability

Abstract

We study the online learnability of hypothesis classes with respect to arbitrary but bounded loss functions. No characterization of online learnability is known at this level of generality. In this paper, we close this gap by showing that existing techniques can be used to characterize any online learning problem with a bounded loss function. Along the way, we introduce a new scale-sensitive combinatorial dimension, the Sequential Minimax dimension, which generalizes all existing dimensions in online learning theory and provides upper and lower bounds on the minimax value.

Cite

Text

Raman et al. "A Unified Theory of Supervised Online Learnability." Proceedings of The 36th International Conference on Algorithmic Learning Theory, 2025.

Markdown

[Raman et al. "A Unified Theory of Supervised Online Learnability." Proceedings of The 36th International Conference on Algorithmic Learning Theory, 2025.](https://mlanthology.org/alt/2025/raman2025alt-unified/)

BibTeX

@inproceedings{raman2025alt-unified,
  title     = {{A Unified Theory of Supervised Online Learnability}},
  author    = {Raman, Vinod and Subedi, Unique and Tewari, Ambuj},
  booktitle = {Proceedings of The 36th International Conference on Algorithmic Learning Theory},
  year      = {2025},
  pages     = {985--1007},
  volume    = {272},
  url       = {https://mlanthology.org/alt/2025/raman2025alt-unified/}
}