Optimal Multiclass U-Calibration Error and Beyond
Abstract
We consider the problem of online multiclass U-calibration, where a forecaster aims to make sequential distributional predictions over $K$ classes with low U-calibration error, that is, low regret with respect to all bounded proper losses simultaneously. Kleinberg et al. (2023) developed an algorithm with U-calibration error $\mathcal{O}(K\sqrt{T})$ after $T$ rounds and raised the open question of what the optimal bound is. We resolve this question by showing that the optimal U-calibration error is $\Theta(\sqrt{KT})$: the upper bound follows from a simple observation that the Follow-the-Perturbed-Leader algorithm of Daskalakis and Syrgkanis (2016) already achieves it, and the matching lower bound is constructed with a specific proper loss (which, as a side result, also proves the optimality of the algorithm of Daskalakis and Syrgkanis (2016) in the context of online learning against an adversary with finite choices). We further strengthen our results under natural assumptions on the loss functions, including $\Theta(\log T)$ U-calibration error for Lipschitz proper losses, $\mathcal{O}(\log T)$ U-calibration error for a certain class of decomposable proper losses, and U-calibration error bounds for proper losses with a low covering number, among others.
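The upper-bound half of the argument rests on the Follow-the-Perturbed-Leader (FTPL) principle: each round, add a fresh random perturbation to the cumulative losses and play the resulting leader. The sketch below is a minimal generic FTPL learner over $K$ actions, not the specific algorithm of Daskalakis and Syrgkanis (2016) analyzed in the paper; the function name ftpl, the exponential perturbation law, the scale eta, and the random demo loss stream are all illustrative assumptions.

import numpy as np

# Minimal Follow-the-Perturbed-Leader (FTPL) sketch over K actions.
# Illustrative only: the perturbation law, the scale `eta`, and the demo
# loss stream are assumptions, not the exact algorithm from the paper.

def ftpl(loss_stream, K, eta, rng):
    """Yield one action per round: the minimizer of perturbed cumulative loss."""
    cumulative = np.zeros(K)
    for loss in loss_stream:
        noise = rng.exponential(scale=eta, size=K)  # fresh perturbation each round
        yield int(np.argmin(cumulative - noise))
        cumulative += loss  # reveal this round's per-action losses afterwards

# Demo: T rounds of uniformly random per-action losses in [0, 1].
T, K = 1000, 5
rng = np.random.default_rng(0)
losses = [rng.uniform(size=K) for _ in range(T)]
actions = list(ftpl(losses, K, eta=np.sqrt(K * T), rng=rng))  # illustrative tuning
regret = sum(losses[t][actions[t]] for t in range(T)) - np.stack(losses).sum(axis=0).min()
print(f"regret vs. best fixed action after {T} rounds: {regret:.2f}")

Drawing fresh perturbations each round keeps the learner's action distribution stable from one round to the next, which is the mechanism behind the $\sqrt{T}$-type regret guarantees in this family of algorithms.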
Cite
Text
Luo et al. "Optimal Multiclass U-Calibration Error and Beyond." Neural Information Processing Systems, 2024. doi:10.52202/079017-0241Markdown
[Luo et al. "Optimal Multiclass U-Calibration Error and Beyond." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/luo2024neurips-optimal/) doi:10.52202/079017-0241BibTeX
@inproceedings{luo2024neurips-optimal,
title = {{Optimal Multiclass U-Calibration Error and Beyond}},
author = {Luo, Haipeng and Senapati, Spandan and Sharan, Vatsal},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-0241},
url = {https://mlanthology.org/neurips/2024/luo2024neurips-optimal/}
}