Multiclass Online Learnability Under Bandit Feedback
Abstract
We study online multiclass classification under bandit feedback. We extend the results of Daniely and Helbertal [2013] by showing that the finiteness of the Bandit Littlestone dimension is necessary and sufficient for bandit online learnability even when the label space is unbounded. Moreover, we show that, unlike the full-information setting, sequential uniform convergence is necessary but not sufficient for bandit online learnability. Our result complements the recent work by Hanneke, Moran, Raman, Subedi, and Tewari [2023] who show that the Littlestone dimension characterizes online multiclass learnability in the full-information setting even when the label space is unbounded.
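The main characterization stated in the abstract can be written compactly as follows. This is a sketch in notation introduced here for illustration (the class $\mathcal{H}$, label space $\mathcal{Y}$, and the symbol $\operatorname{BLdim}$ for the Bandit Littlestone dimension are our shorthand, not copied from the paper page).

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Characterization from the abstract (notation assumed for illustration):
% a hypothesis class H of functions from X to a possibly unbounded label
% space Y is online learnable under bandit feedback if and only if its
% Bandit Littlestone dimension is finite.
\[
  \mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}
  \ \text{is bandit online learnable}
  \iff
  \operatorname{BLdim}(\mathcal{H}) < \infty .
\]
% Unlike the full-information setting, sequential uniform convergence is
% necessary but not sufficient for this to hold.
\end{document}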
Cite
Text
Raman et al. "Multiclass Online Learnability Under Bandit Feedback." Proceedings of The 35th International Conference on Algorithmic Learning Theory, 2024.

Markdown
[Raman et al. "Multiclass Online Learnability Under Bandit Feedback." Proceedings of The 35th International Conference on Algorithmic Learning Theory, 2024.](https://mlanthology.org/alt/2024/raman2024alt-multiclass/)

BibTeX
@inproceedings{raman2024alt-multiclass,
title = {{Multiclass Online Learnability Under Bandit Feedback}},
author = {Raman, Ananth and Raman, Vinod and Subedi, Unique and Mehalel, Idan and Tewari, Ambuj},
booktitle = {Proceedings of The 35th International Conference on Algorithmic Learning Theory},
year = {2024},
  pages = {997--1012},
volume = {237},
url = {https://mlanthology.org/alt/2024/raman2024alt-multiclass/}
}